# bottleneck

[![Downloads][npm-downloads]][npm-url]
[![version][npm-version]][npm-url]
[![License][npm-license]][license-url]


Bottleneck is a lightweight and zero-dependency Task Scheduler and Rate Limiter for Node.js and the browser.

Bottleneck is an easy solution as it adds very little complexity to your code. It is battle-hardened, reliable, production-ready, and used at large scale in private companies and open source software.

It supports **Clustering**: it can rate limit jobs across multiple Node.js instances. It uses Redis and strictly atomic operations to stay reliable in the presence of unreliable clients and networks. It also supports *Redis Cluster* and *Redis Sentinel*.

**[Upgrading from version 1?](#upgrading-to-v2)**

<!-- toc -->

- [Install](#install)
- [Quick Start](#quick-start)
  * [Gotchas & Common Mistakes](#gotchas--common-mistakes)
- [Constructor](#constructor)
- [Reservoir Intervals](#reservoir-intervals)
- [`submit()`](#submit)
- [`schedule()`](#schedule)
- [`wrap()`](#wrap)
- [Job Options](#job-options)
- [Jobs Lifecycle](#jobs-lifecycle)
- [Events](#events)
- [Retries](#retries)
- [`updateSettings()`](#updatesettings)
- [`incrementReservoir()`](#incrementreservoir)
- [`currentReservoir()`](#currentreservoir)
- [`stop()`](#stop)
- [`chain()`](#chain)
- [Group](#group)
- [Batching](#batching)
- [Clustering](#clustering)
- [Debugging Your Application](#debugging-your-application)
- [Upgrading To v2](#upgrading-to-v2)
- [Contributing](#contributing)

<!-- tocstop -->

## Install

```
npm install --save bottleneck
```

```js
import Bottleneck from "bottleneck";

// Note: To support older browsers and Node <6.0, you must import the ES5 bundle instead.
var Bottleneck = require("bottleneck/es5");
```

## Quick Start

### Step 1 of 3

Most APIs have a rate limit. For example, to execute 3 requests per second:
```js
const limiter = new Bottleneck({
  minTime: 333
});
```

If there's a chance some requests might take longer than 333ms and you want to prevent more than 1 request from running at a time, add `maxConcurrent: 1`:
```js
const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 333
});
```

`minTime` and `maxConcurrent` are enough for the majority of use cases. They work well together to ensure a smooth rate of requests. If your use case requires executing requests in **bursts** or every time a quota resets, look into [Reservoir Intervals](#reservoir-intervals).

### Step 2 of 3

#### ➤ Using promises?

Instead of this:
```js
myFunction(arg1, arg2)
.then((result) => {
  /* handle result */
});
```
Do this:
```js
limiter.schedule(() => myFunction(arg1, arg2))
.then((result) => {
  /* handle result */
});
```
Or this:
```js
const wrapped = limiter.wrap(myFunction);

wrapped(arg1, arg2)
.then((result) => {
  /* handle result */
});
```

#### ➤ Using async/await?

Instead of this:
```js
const result = await myFunction(arg1, arg2);
```
Do this:
```js
const result = await limiter.schedule(() => myFunction(arg1, arg2));
```
Or this:
```js
const wrapped = limiter.wrap(myFunction);

const result = await wrapped(arg1, arg2);
```

#### ➤ Using callbacks?

Instead of this:
```js
someAsyncCall(arg1, arg2, callback);
```
Do this:
```js
limiter.submit(someAsyncCall, arg1, arg2, callback);
```

### Step 3 of 3

Remember...

Bottleneck builds a queue of jobs and executes them as soon as possible. By default, jobs are executed in the order they were received.

**Read the 'Gotchas' and you're good to go**. Or keep reading to learn about all the fine tuning and advanced options available. If your rate limits need to be enforced across a cluster of computers, read the [Clustering](#clustering) docs.

[Need help debugging your application?](#debugging-your-application)

Instead of throttling, maybe [you want to batch up requests](#batching) into fewer calls?

### Gotchas & Common Mistakes

* Make sure the function you pass to `schedule()` or `wrap()` only returns once **all the work it does** has completed.

Instead of this:
```js
limiter.schedule(() => {
  tasksArray.forEach(x => processTask(x));
  // BAD, we return before our processTask() functions are finished processing!
});
```
Do this:
```js
limiter.schedule(() => {
  const allTasks = tasksArray.map(x => processTask(x));
  // GOOD, we wait until all tasks are done.
  return Promise.all(allTasks);
});
```

* If you're passing an object's method as a job, you'll probably need to `bind()` the object:
```js
// instead of this:
limiter.schedule(object.doSomething);
// do this:
limiter.schedule(object.doSomething.bind(object));
// or, wrap it in an arrow function instead:
limiter.schedule(() => object.doSomething());
```

* Bottleneck requires Node 6+ to function. However, an ES5 build is included: `var Bottleneck = require("bottleneck/es5");`.

* Make sure you're catching `"error"` events emitted by your limiters!

* Consider setting a `maxConcurrent` value instead of leaving it `null`. This can help your application's performance, especially if you think the limiter's queue might become very long.

* If you plan on using `priorities`, make sure to set a `maxConcurrent` value.

* **When using `submit()`**, if a callback isn't necessary, you must pass `null` or an empty function instead. It will not work otherwise.

* **When using `submit()`**, make sure all the jobs will eventually complete by calling their callback, or set an [`expiration`](#job-options). Even if you submitted your job with a `null` callback, it still needs to call its callback. This is particularly important if you are using a `maxConcurrent` value that isn't `null` (unlimited), otherwise the uncompleted jobs will clog up the limiter and no new jobs will be allowed to run. It's safe to call the callback more than once; subsequent calls are ignored. See the sketch below.
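
Here's a minimal sketch of that pattern (the job and its timings are made up for illustration):
```js
const limiter = new Bottleneck({ maxConcurrent: 1 });

// The job receives the limiter's internal callback as its last argument and
// must call it, even though we submitted with a `null` callback of our own.
const job = (done) => {
  setTimeout(() => done(null, "finished"), 100); // simulate async work
};

limiter.submit(job, null); // `null` stands in for the callback we don't need
```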

## Docs

### Constructor

```js
const limiter = new Bottleneck({/* options */});
```

Basic options:

| Option | Default | Description |
|--------|---------|-------------|
| `maxConcurrent` | `null` (unlimited) | How many jobs can be executing at the same time. Consider setting a value instead of leaving it `null`; it can help your application's performance, especially if you think the limiter's queue might get very long. |
| `minTime` | `0` ms | How long to wait after launching a job before launching another one. |
| `highWater` | `null` (unlimited) | How long can the queue be? When the queue length exceeds that value, the selected `strategy` is executed to shed the load. |
| `strategy` | `Bottleneck.strategy.LEAK` | Which strategy to use when the queue gets longer than the high water mark. [Read about strategies](#strategies). Strategies are never executed if `highWater` is `null`. |
| `penalty` | `15 * minTime`, or `5000` when `minTime` is `0` | The `penalty` value used by the `BLOCK` strategy. |
| `reservoir` | `null` (unlimited) | How many jobs can be executed before the limiter stops executing jobs. If `reservoir` reaches `0`, no jobs will be executed until it is no longer `0`. New jobs will still be queued up. |
| `reservoirRefreshInterval` | `null` (disabled) | Every `reservoirRefreshInterval` milliseconds, the `reservoir` value will be automatically updated to the value of `reservoirRefreshAmount`. The `reservoirRefreshInterval` value should be a [multiple of 250 (5000 for Clustering)](https://github.com/SGrondin/bottleneck/issues/88). |
| `reservoirRefreshAmount` | `null` (disabled) | The value to set `reservoir` to when `reservoirRefreshInterval` is in use. |
| `reservoirIncreaseInterval` | `null` (disabled) | Every `reservoirIncreaseInterval` milliseconds, the `reservoir` value will be automatically incremented by `reservoirIncreaseAmount`. The `reservoirIncreaseInterval` value should be a [multiple of 250 (5000 for Clustering)](https://github.com/SGrondin/bottleneck/issues/88). |
| `reservoirIncreaseAmount` | `null` (disabled) | The increment applied to `reservoir` when `reservoirIncreaseInterval` is in use. |
| `reservoirIncreaseMaximum` | `null` (disabled) | The maximum value that `reservoir` can reach when `reservoirIncreaseInterval` is in use. |
| `Promise` | `Promise` (built-in) | This lets you override the Promise library used by Bottleneck. |

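As an illustration, here's a limiter combining several of these options; the specific values are arbitrary:
```js
const limiter = new Bottleneck({
  maxConcurrent: 2,  // at most 2 jobs running at once
  minTime: 100,      // wait at least 100ms between job launches
  highWater: 50,     // shed load once more than 50 jobs are queued
  strategy: Bottleneck.strategy.OVERFLOW, // drop new jobs when over highWater
  reservoir: 1000    // stop executing after 1000 jobs, until refilled
});
```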

### Reservoir Intervals

Reservoir Intervals let you execute requests in bursts, by automatically controlling the limiter's `reservoir` value. The `reservoir` is simply the number of jobs the limiter is allowed to execute. Once the value reaches 0, it stops starting new jobs.

There are 2 types of Reservoir Intervals: Refresh Intervals and Increase Intervals.

#### Refresh Interval

In this example, we throttle to 100 requests every 60 seconds:

```js
const limiter = new Bottleneck({
  reservoir: 100, // initial value
  reservoirRefreshAmount: 100,
  reservoirRefreshInterval: 60 * 1000, // must be divisible by 250

  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 1,
  minTime: 333 // pick a value that makes sense for your use case
});
```
`reservoir` is a counter decremented every time a job is launched; here we set its initial value to 100. Then, every `reservoirRefreshInterval` (60000 ms), `reservoir` is automatically updated to be equal to the `reservoirRefreshAmount` (100).

#### Increase Interval

In this example, we throttle jobs to meet the Shopify API Rate Limits. Users are allowed to send 40 requests initially, then every second grants 2 more requests, up to a maximum of 40.

```js
const limiter = new Bottleneck({
  reservoir: 40, // initial value
  reservoirIncreaseAmount: 2,
  reservoirIncreaseInterval: 1000, // must be divisible by 250
  reservoirIncreaseMaximum: 40,

  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 5,
  minTime: 250 // pick a value that makes sense for your use case
});
```

#### Warnings

Reservoir Intervals are an advanced feature, please take the time to read and understand the following warnings.

- **Reservoir Intervals are not a replacement for `minTime` and `maxConcurrent`.** It's strongly recommended to also use `minTime` and/or `maxConcurrent` to spread out the load. For example, suppose a lot of jobs are queued up because the `reservoir` is 0. Every time the Refresh Interval is triggered, a number of jobs equal to `reservoirRefreshAmount` will automatically be launched, all at the same time! To prevent this flooding effect and keep your application running smoothly, use `minTime` and `maxConcurrent` to **stagger** the jobs.

- **The Reservoir Interval starts from the moment the limiter is created.** Let's suppose we're using `reservoirRefreshAmount: 5`. If you happen to add 10 jobs just 1ms before the refresh is triggered, the first 5 will run immediately; then, 1ms later, the reservoir refreshes and the last 5 also run right away. The limiter will have run 10 jobs in just over 1ms, no matter what your reservoir interval was!

- **Reservoir Intervals prevent a limiter from being garbage collected.** Call `limiter.disconnect()` to clear the interval and allow the memory to be freed. However, it's not necessary to call `.disconnect()` to allow the Node.js process to exit.

### submit()

Adds a job to the queue. This is the callback version of `schedule()`.
```js
limiter.submit(someAsyncCall, arg1, arg2, callback);
```
You can pass `null` instead of an empty function if there is no callback, but `someAsyncCall` still needs to call **its** callback to let the limiter know it has completed its work.

`submit()` can also accept [advanced options](#job-options).

### schedule()

Adds a job to the queue. This is the Promise and async/await version of `submit()`.
```js
const fn = function(arg1, arg2) {
  return httpGet(arg1, arg2); // Here httpGet() returns a promise
};

limiter.schedule(fn, arg1, arg2)
.then((result) => {
  /* ... */
});
```
In other words, `schedule()` takes a function **fn** and a list of arguments. It returns a promise for the result of calling **fn** with those arguments, executed according to the rate limits.

`schedule()` can also accept [advanced options](#job-options).

Here's another example:
```js
// suppose that `client.get(url)` returns a promise

const url = "https://wikipedia.org";

limiter.schedule(() => client.get(url))
.then(response => console.log(response.body));
```

### wrap()

Takes a function that returns a promise. Returns a function identical to the original, but rate limited.
```js
const wrapped = limiter.wrap(fn);

wrapped()
.then(function (result) {
  /* ... */
})
.catch(function (error) {
  // Bottleneck might need to fail the job even if the original function can never fail.
  // For example, your job is taking longer than the `expiration` time you've set.
});
```

### Job Options

`submit()`, `schedule()`, and `wrap()` all accept advanced options.
```js
// Submit
limiter.submit({/* options */}, someAsyncCall, arg1, arg2, callback);

// Schedule
limiter.schedule({/* options */}, fn, arg1, arg2);

// Wrap
const wrapped = limiter.wrap(fn);
wrapped.withOptions({/* options */}, arg1, arg2);
```

| Option | Default | Description |
|--------|---------|-------------|
| `priority` | `5` | A priority between `0` and `9`. A job with a priority of `4` will be queued ahead of a job with a priority of `5`. **Important:** You must set a low `maxConcurrent` value for priorities to work, otherwise there is nothing to queue because jobs will be scheduled immediately! |
| `weight` | `1` | Must be an integer equal to or higher than `0`. The `weight` is what increases the number of running jobs (up to `maxConcurrent`) and decreases the `reservoir` value. |
| `expiration` | `null` (unlimited) | The number of milliseconds a job is given to complete. Jobs that execute for longer than `expiration` ms will be failed with a `BottleneckError`. |
| `id` | `<no-id>` | You should give an ID to your jobs, it helps with [debugging](#debugging-your-application). |

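For instance, a single job can combine several of these options (the values here are only illustrative, and `client.get(url)` is the promise-returning call from the `schedule()` example):
```js
limiter.schedule(
  { priority: 4, weight: 2, expiration: 5000, id: "fetch-user-42" },
  () => client.get(url)
);
```
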
### Strategies

A strategy is a simple algorithm that is executed every time adding a job would cause the number of queued jobs to exceed `highWater`. Strategies are never executed if `highWater` is `null`.

#### Bottleneck.strategy.LEAK
When adding a new job to a limiter, if the queue length reaches `highWater`, drop the oldest job with the lowest priority. This is useful when jobs that have been waiting for too long are not important anymore. If all the queued jobs are more important (based on their `priority` value) than the one being added, it will not be added.

#### Bottleneck.strategy.OVERFLOW_PRIORITY
Same as `LEAK`, except it will only drop jobs that are *less important* than the one being added. If all the queued jobs are as or more important than the new one, it will not be added.

#### Bottleneck.strategy.OVERFLOW
When adding a new job to a limiter, if the queue length reaches `highWater`, do not add the new job. This strategy totally ignores priority levels.

#### Bottleneck.strategy.BLOCK
When adding a new job to a limiter, if the queue length reaches `highWater`, the limiter falls into "blocked mode". All queued jobs are dropped and no new jobs will be accepted until the limiter unblocks. It will unblock after `penalty` milliseconds have passed without receiving a new job. `penalty` is equal to `15 * minTime` (or `5000` if `minTime` is `0`) by default. This strategy is ideal when bruteforce attacks are to be expected. This strategy totally ignores priority levels.
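
For example, a sketch of a limiter using the `BLOCK` strategy (the values are illustrative):
```js
const limiter = new Bottleneck({
  minTime: 100,
  highWater: 20,
  strategy: Bottleneck.strategy.BLOCK,
  penalty: 3000 // stay blocked until 3000ms pass without receiving a new job
});
```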

### Jobs lifecycle

1. **Received**. Your new job has been added to the limiter. Bottleneck needs to check whether it can be accepted into the queue.
2. **Queued**. Bottleneck has accepted your job, but it cannot tell yet at what exact timestamp it will run, because that depends on the previous jobs.
3. **Running**. Your job is not in the queue anymore, it will be executed after a delay that was computed according to your `minTime` setting.
4. **Executing**. Your job is executing its code.
5. **Done**. Your job has completed.

**Note:** By default, Bottleneck does not keep track of DONE jobs, to save memory. You can enable this feature by passing `trackDoneStatus: true` as an option when creating a limiter.

#### counts()

```js
const counts = limiter.counts();

console.log(counts);
/*
{
  RECEIVED: 0,
  QUEUED: 0,
  RUNNING: 0,
  EXECUTING: 0,
  DONE: 0
}
*/
```

Returns an object with the current number of jobs per status in the limiter.

#### jobStatus()

```js
console.log(limiter.jobStatus("some-job-id"));
// Example: QUEUED
```

Returns the status of the job with the provided job id **in the limiter**. Returns `null` if no job with that id exists.

#### jobs()

```js
console.log(limiter.jobs("RUNNING"));
// Example: ['id1', 'id2']
```

Returns an array of all the job ids with the specified status **in the limiter**. Not passing a status string returns all the known ids.

#### queued()

```js
const count = limiter.queued(priority);

console.log(count);
```

`priority` is optional. Returns the number of `QUEUED` jobs with the given `priority` level. Omitting the `priority` argument returns the total number of queued jobs **in the limiter**.

#### clusterQueued()

```js
const count = await limiter.clusterQueued();

console.log(count);
```

Returns the number of `QUEUED` jobs **in the Cluster**.

#### empty()

```js
if (limiter.empty()) {
  // do something...
}
```

Returns a boolean which indicates whether there are any `RECEIVED` or `QUEUED` jobs **in the limiter**.

#### running()

```js
limiter.running()
.then((count) => console.log(count));
```

Returns a promise that resolves with the **total weight** of the `RUNNING` and `EXECUTING` jobs **in the Cluster**.

#### done()

```js
limiter.done()
.then((count) => console.log(count));
```

Returns a promise that resolves with the **total weight** of `DONE` jobs **in the Cluster**. Does not require passing the `trackDoneStatus: true` option.

#### check()

```js
limiter.check()
.then((wouldRunNow) => console.log(wouldRunNow));
```
Checks if a new job would be executed immediately if it was submitted now. Returns a promise that resolves with a boolean.


### Events

__'error'__
```js
limiter.on("error", function (error) {
  /* handle errors here */
});
```

The two main causes of error events are uncaught exceptions in your event handlers, and network errors when Clustering is enabled.

__'failed'__
```js
limiter.on("failed", function (error, jobInfo) {
  // This will be called every time a job fails.
});
```

__'retry'__

See [Retries](#retries) to learn how to automatically retry jobs.
```js
limiter.on("retry", function (message, jobInfo) {
  // This will be called every time a job is retried.
});
```

__'empty'__
```js
limiter.on("empty", function () {
  // This will be called when `limiter.empty()` becomes true.
});
```

__'idle'__
```js
limiter.on("idle", function () {
  // This will be called when `limiter.empty()` is `true` and `limiter.running()` is `0`.
});
```

__'dropped'__
```js
limiter.on("dropped", function (dropped) {
  // This will be called when a strategy was triggered.
  // The dropped request is passed to this event listener.
});
```

__'depleted'__
```js
limiter.on("depleted", function (empty) {
  // This will be called every time the reservoir drops to 0.
  // The `empty` (boolean) argument indicates whether `limiter.empty()` is currently true.
});
```

__'debug'__
```js
limiter.on("debug", function (message, data) {
  // Useful to figure out what the limiter is doing in real time
  // and to help debug your application
});
```

__'received'__
__'queued'__
__'scheduled'__
__'executing'__
__'done'__
```js
limiter.on("queued", function (info) {
  // This event is triggered when a job transitions from one Lifecycle stage to another
});
```

See [Jobs Lifecycle](#jobs-lifecycle) for more information.

These Lifecycle events are not triggered for jobs located on another limiter in a Cluster, for performance reasons.

#### Other event methods

Use `removeAllListeners()` with an optional event name as first argument to remove listeners.

Use `.once()` instead of `.on()` to only receive a single event.
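
For example (a minimal sketch):
```js
// Remove the listeners for one event...
limiter.removeAllListeners("debug");
// ...or for every event at once.
limiter.removeAllListeners();

// Fire a handler only on the first occurrence of an event.
limiter.once("empty", () => console.log("The queue just emptied"));
```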

### Retries

The following example:
```js
const limiter = new Bottleneck();

// Listen to the "failed" event
limiter.on("failed", async (error, jobInfo) => {
  const id = jobInfo.options.id;
  console.warn(`Job ${id} failed: ${error}`);

  if (jobInfo.retryCount === 0) { // Here we only retry once
    console.log(`Retrying job ${id} in 25ms!`);
    return 25;
  }
});

// Listen to the "retry" event
limiter.on("retry", (error, jobInfo) => console.log(`Now retrying ${jobInfo.options.id}`));

const main = async function () {
  let executions = 0;

  // Schedule one job
  const result = await limiter.schedule({ id: 'ABC123' }, async () => {
    executions++;
    if (executions === 1) {
      throw new Error("Boom!");
    } else {
      return "Success!";
    }
  });

  console.log(`Result: ${result}`);
};

main();
```
will output
```
Job ABC123 failed: Error: Boom!
Retrying job ABC123 in 25ms!
Now retrying ABC123
Result: Success!
```
To re-run your job, simply return an integer from the `'failed'` event handler. The number returned is how many milliseconds to wait before retrying it. Return `0` to retry it immediately.

**IMPORTANT:** When you ask the limiter to retry a job it will not send it back into the queue. It will stay in the `EXECUTING` [state](#jobs-lifecycle) until it succeeds or until you stop retrying it. **This means that it counts as a concurrent job for `maxConcurrent` even while it's just waiting to be retried.** The number of milliseconds to wait ignores your `minTime` settings.


### updateSettings()

```js
limiter.updateSettings(options);
```
The options are the same as the [limiter constructor](#constructor).

**Note:** Changes don't affect `SCHEDULED` jobs.

### incrementReservoir()

```js
limiter.incrementReservoir(incrementBy);
```
Returns a promise that resolves with the new reservoir value.

### currentReservoir()

```js
limiter.currentReservoir()
.then((reservoir) => console.log(reservoir));
```
Returns a promise that resolves with the current reservoir value.

### stop()

The `stop()` method is used to safely shut down a limiter. It prevents any new jobs from being added to the limiter and waits for all `EXECUTING` jobs to complete.

```js
limiter.stop(options)
.then(() => {
  console.log("Shutdown completed!")
});
```

`stop()` returns a promise that resolves once all the `EXECUTING` jobs have completed and, if desired, once all non-`EXECUTING` jobs have been dropped.

| Option | Default | Description |
|--------|---------|-------------|
| `dropWaitingJobs` | `true` | When `true`, drop all the `RECEIVED`, `QUEUED` and `RUNNING` jobs. When `false`, allow those jobs to complete before resolving the Promise returned by this method. |
| `dropErrorMessage` | `This limiter has been stopped.` | The error message used to drop jobs when `dropWaitingJobs` is `true`. |
| `enqueueErrorMessage` | `This limiter has been stopped and cannot accept new jobs.` | The error message used to reject a job added to the limiter after `stop()` has been called. |

### chain()

Chaining a limiter to another limiter means that jobs ready to be executed on the first limiter are added to the other limiter before they run. Suppose you have 2 types of tasks, A and B. They both have their own limiter with their own settings, but both must also follow a global limiter G:
```js
const limiterA = new Bottleneck( /* some settings */ );
const limiterB = new Bottleneck( /* some different settings */ );
const limiterG = new Bottleneck( /* some global settings */ );

limiterA.chain(limiterG);
limiterB.chain(limiterG);

// Requests added to limiterA must follow the A and G rate limits.
// Requests added to limiterB must follow the B and G rate limits.
// Requests added to limiterG must follow the G rate limits.
```

To unchain, call `limiter.chain(null);`.

## Group

The `Group` feature of Bottleneck manages many limiters automatically for you. It creates limiters dynamically and transparently.

Let's take a DNS server as an example of how Bottleneck can be used. It's a service that sees a lot of abuse and where incoming DNS requests need to be rate limited. Bottleneck is so tiny, it's acceptable to create one limiter for each origin IP, even if it means creating thousands of limiters. The `Group` feature is perfect for this use case. Create one Group and use the origin IP to rate limit each IP independently. Each call with the same key (IP) will be routed to the same underlying limiter. A Group is created like a limiter:


```js
const group = new Bottleneck.Group(options);
```

The `options` object will be used for every limiter created by the Group.

The Group is then used with the `.key(str)` method:

```js
// In this example, the key is an IP
group.key("77.66.54.32").schedule(() => {
  /* process the request */
});
```

#### key()

* `str` : The key to use. All jobs added with the same key will use the same underlying limiter. *Default: `""`*

The return value of `.key(str)` is a limiter. If it doesn't already exist, it is generated for you. Calling `key()` is how limiters are created inside a Group.

Limiters that have been idle for longer than 5 minutes are deleted to avoid memory leaks; this value can be changed by passing a different `timeout` option, in milliseconds.
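
For example, a sketch of a Group that keeps idle limiters around for 10 minutes instead (the values are illustrative):
```js
const group = new Bottleneck.Group({
  timeout: 10 * 60 * 1000, // delete limiters after 10 minutes of inactivity
  maxConcurrent: 1         // options are applied to every limiter in the Group
});
```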

#### on("created")

```js
group.on("created", (limiter, key) => {
  console.log("A new limiter was created for key: " + key)

  // Prepare the limiter, for example we'll want to listen to its "error" events!
  limiter.on("error", (err) => {
    // Handle errors here
  })
});
```

Listening for the `"created"` event is the recommended way to set up a new limiter. Your event handler is executed before `key()` returns the newly created limiter.

#### updateSettings()

```js
const group = new Bottleneck.Group({ maxConcurrent: 2, minTime: 250 });
group.updateSettings({ minTime: 500 });
```
After executing the above commands, **new limiters** will be created with `{ maxConcurrent: 2, minTime: 500 }`.


#### deleteKey()

* `str`: The key for the limiter to delete.

Manually deletes the limiter at the specified key. When using Clustering, the Redis data is immediately deleted and the other Groups in the Cluster will eventually delete their local key automatically, unless it is still being used.

#### keys()

Returns an array containing all the keys in the Group.

#### clusterKeys()

Same as `group.keys()`, but returns all keys in this Group ID across the Cluster.
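
For example (a minimal sketch; since `clusterKeys()` involves a Redis call, it is assumed here to return a promise):
```js
console.log(group.keys());
// Example: ["77.66.54.32", "77.66.54.33"]

group.clusterKeys()
.then((keys) => console.log(keys)); // every key for this Group ID in the Cluster
```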

#### limiters()

```js
const limiters = group.limiters();

console.log(limiters);
// [ { key: "some key", limiter: <limiter> }, { key: "some other key", limiter: <some other limiter> } ]
```

## Batching

Some APIs can accept multiple operations in a single call. Bottleneck's Batching feature helps you take advantage of those APIs:
```js
const batcher = new Bottleneck.Batcher({
  maxTime: 1000,
  maxSize: 10
});

batcher.on("batch", (batch) => {
  console.log(batch); // ["some-data", "some-other-data"]

  // Handle batch here
});

batcher.add("some-data");
batcher.add("some-other-data");
```

`batcher.add()` returns a Promise that resolves once the request has been flushed to a `"batch"` event.
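
For example, to know when a given item has been flushed (a minimal sketch):
```js
const send = async (data) => {
  await batcher.add(data); // resolves once `data` has been flushed to a "batch" event
  console.log("flushed:", data);
};

send("some-data");
```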

| Option | Default | Description |
|--------|---------|-------------|
| `maxTime` | `null` (unlimited) | Maximum acceptable time (in milliseconds) a request can have to wait before being flushed to the `"batch"` event. |
| `maxSize` | `null` (unlimited) | Maximum number of requests in a batch. |

Batching doesn't throttle requests, it only groups them up optimally according to your `maxTime` and `maxSize` settings.

## Clustering

Clustering lets many limiters access the same shared state, stored in Redis. Changes to the state are Atomic, Consistent and Isolated (and fully [ACID](https://en.wikipedia.org/wiki/ACID) with the right [Durability](https://redis.io/topics/persistence) configuration), to eliminate any chances of race conditions or state corruption. Your settings, such as `maxConcurrent`, `minTime`, etc., are shared across the whole cluster, which means —for example— that `{ maxConcurrent: 5 }` guarantees no more than 5 jobs can ever run at a time in the entire cluster of limiters. 100% of Bottleneck's features are supported in Clustering mode. Enabling Clustering is as simple as changing a few settings. It's also a convenient way to store or export state for later use.

Bottleneck will attempt to spread load evenly across limiters.

### Enabling Clustering

First, add `redis` or `ioredis` to your application's dependencies:
```bash
# NodeRedis (https://github.com/NodeRedis/node_redis)
npm install --save redis

# or ioredis (https://github.com/luin/ioredis)
npm install --save ioredis
```
Then create a limiter or a Group:
```js
const limiter = new Bottleneck({
  /* Some basic options */
  maxConcurrent: 5,
  minTime: 500,
  id: "my-super-app", // All limiters with the same id will be clustered together

  /* Clustering options */
  datastore: "redis", // or "ioredis"
  clearDatastore: false,
  clientOptions: {
    host: "127.0.0.1",
    port: 6379

    // Redis client options
    // Using NodeRedis? See https://github.com/NodeRedis/node_redis#options-object-properties
    // Using ioredis? See https://github.com/luin/ioredis/blob/master/API.md#new-redisport-host-options
  }
});
```

| Option | Default | Description |
|--------|---------|-------------|
| `datastore` | `"local"` | Where the limiter stores its internal state. The default (`"local"`) keeps the state in the limiter itself. Set it to `"redis"` or `"ioredis"` to enable Clustering. |
| `clearDatastore` | `false` | When set to `true`, on initial startup, the limiter will wipe any existing Bottleneck state data on the Redis db. |
| `clientOptions` | `{}` | This object is passed directly to the redis client library you've selected. |
| `clusterNodes` | `null` | **ioredis only.** When `clusterNodes` is not null, the client will be instantiated by calling `new Redis.Cluster(clusterNodes, clientOptions)` instead of `new Redis(clientOptions)`. |
| `timeout` | `null` (no TTL) | The Redis TTL in milliseconds ([TTL](https://redis.io/commands/ttl)) for the keys created by the limiter. When `timeout` is set, the limiter's state will be automatically removed from Redis after `timeout` milliseconds of inactivity. |
| `Redis` | `null` | Overrides the import/require of the redis/ioredis library. You shouldn't need to set this option unless your application is failing to start due to a failure to require/import the client library. |

**Note: When using Groups**, the `timeout` option has a default of `300000` milliseconds and the generated limiters automatically receive an `id` with the pattern `${group.id}-${KEY}`.

**Note:** If you are seeing a runtime error due to the `require()` function not being able to load `redis`/`ioredis`, then directly pass the module as the `Redis` option. Example:
```js
import Redis from "ioredis"

const limiter = new Bottleneck({
  id: "my-super-app",
  datastore: "ioredis",
  clientOptions: { host: '12.34.56.78', port: 6379 },
  Redis
});
```
Unfortunately, this is a side effect of having to disable inlining, which is necessary to make Bottleneck easy to use in the browser.

### Important considerations when Clustering

The first limiter connecting to Redis will store its [constructor options](#constructor) on Redis and all subsequent limiters will be using those settings. You can alter the constructor options used by all the connected limiters by calling `updateSettings()`. The `clearDatastore` option instructs a new limiter to wipe any previous Bottleneck data (for that `id`), including previously stored settings.

Queued jobs are **NOT** stored on Redis. They are local to each limiter. Exiting the Node.js process will lose those jobs. This is because Bottleneck has no way to propagate the JS code to run a job across a different Node.js process than the one it originated on. Bottleneck doesn't keep track of the queue contents of the limiters on a cluster for performance and reliability reasons. You can use something like [`BeeQueue`](https://github.com/bee-queue/bee-queue) in addition to Bottleneck to get around this limitation.

Due to the above, functionality relying on the queue length happens purely locally:
- Priorities are local. A higher priority job will run before a lower priority job **on the same limiter**. Another limiter on the cluster might run a lower priority job before our higher priority one.
- Assuming constant priority levels, Bottleneck guarantees that jobs will be run in the order they were received **on the same limiter**. Another limiter on the cluster might run a job received later before ours runs.
- `highWater` and load shedding ([strategies](#strategies)) are per limiter. However, one limiter entering Blocked mode will put the entire cluster in Blocked mode until `penalty` milliseconds have passed. See [Strategies](#strategies).
- The `"empty"` event is triggered when the (local) queue is empty.
- The `"idle"` event is triggered when the (local) queue is empty *and* no jobs are currently running anywhere in the cluster.

You must work around these limitations in your application code if they are an issue to you. The `publish()` method could be useful here.

The current design guarantees reliability, is highly performant and lets limiters come and go. Your application can scale up or down, and clients can be disconnected at any time without issues.

It is **strongly recommended** that you give an `id` to every limiter and Group, since it is used to build the name of your limiter's Redis keys! Limiters with the same `id` inside the same Redis db will be sharing the same datastore.

It is **strongly recommended** that you set an `expiration` (see [Job Options](#job-options)) *on every job*, since that lets the cluster recover from crashed or disconnected clients. Otherwise, a client crashing while executing a job would not be able to tell the cluster to decrease its number of "running" jobs. By using expirations, those lost jobs are automatically cleared after the specified time has passed. Using expirations is essential to keeping a cluster reliable in the face of unpredictable application bugs, network hiccups, and so on.
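
For example, giving every job an expiration (the 10000ms value is illustrative, and `client.get(url)` is assumed to return a promise):
```js
limiter.schedule({ expiration: 10000, id: "job-77" }, () => client.get(url))
.catch((error) => {
  // Jobs running longer than 10000ms fail with a BottleneckError,
  // which frees up their "running" weight across the cluster.
});
```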

Network latency between Node.js and Redis is not taken into account when calculating timings (such as `minTime`). To minimize the impact of latency, Bottleneck only performs a single Redis call per [lifecycle transition](#jobs-lifecycle). Keeping the Redis server close to your limiters will help you get a more consistent experience. Keeping the system time consistent across all clients will also help.

It is **strongly recommended** to [set up an `"error"` listener](#events) on all your limiters and on your Groups.

### Clustering Methods

The `ready()`, `publish()` and `clients()` methods also exist when using the `local` datastore, for code compatibility reasons: code written for `redis`/`ioredis` won't break with `local`.

#### ready()

This method returns a promise that resolves once the limiter is connected to Redis.

As of v2.9.0, it's no longer necessary to wait for `.ready()` to resolve before issuing commands to a limiter. The commands will be queued until the limiter successfully connects. Make sure to listen to the `"error"` event to handle connection errors.

```js
const limiter = new Bottleneck({/* options */});

limiter.on("error", (err) => {
  // handle network errors
});

limiter.ready()
.then(() => {
  // The limiter is ready
});
```

#### publish(message)

This method broadcasts the `message` string to every limiter in the Cluster. It returns a promise.
```js
const limiter = new Bottleneck({/* options */});

limiter.on("message", (msg) => {
  console.log(msg); // prints "this is a string"
});

limiter.publish("this is a string");
```

To send objects, stringify them first:
```js
limiter.on("message", (msg) => {
  console.log(JSON.parse(msg).hello) // prints "world"
});

limiter.publish(JSON.stringify({ hello: "world" }));
```

#### clients()

If you need direct access to the redis clients, use `.clients()`:
```js
console.log(limiter.clients());
// { client: <Redis Client>, subscriber: <Redis Client> }
```

### Additional Clustering information

- Bottleneck is compatible with [Redis Clusters](https://redis.io/topics/cluster-tutorial), but you must use the `ioredis` datastore and the `clusterNodes` option.
- Bottleneck is compatible with Redis Sentinel, but you must use the `ioredis` datastore.
- Bottleneck's data is stored in Redis keys starting with `b_`. It also uses pubsub channels starting with `b_`. It will not interfere with any other data stored on the server.
- Bottleneck loads a few Lua scripts on the Redis server using the `SCRIPT LOAD` command. These scripts only take up a few Kb of memory. Running the `SCRIPT FLUSH` command will cause any connected limiters to experience critical errors until a new limiter connects to Redis and loads the scripts again.
- The Lua scripts are highly optimized and designed to use as few resources as possible.

### Managing Redis Connections

Bottleneck needs to create 2 Redis Clients to function, one for normal operations and one for pubsub subscriptions. These 2 clients are kept in a `Bottleneck.RedisConnection` (NodeRedis) or a `Bottleneck.IORedisConnection` (ioredis) object, referred to as the Connection object.

By default, every Group and every standalone limiter (a limiter not created by a Group) will create their own Connection object, but it is possible to manually control this behavior. In this example, every Group and limiter is sharing the same Connection object and therefore the same 2 clients:
```js
const connection = new Bottleneck.RedisConnection({
  clientOptions: {/* NodeRedis/ioredis options */}
  // ioredis also accepts `clusterNodes` here
});


const limiter = new Bottleneck({ connection: connection });
const group = new Bottleneck.Group({ connection: connection });
```
You can access and reuse the Connection object of any Group or limiter:
```js
const group = new Bottleneck.Group({ connection: limiter.connection });
```
When a Connection object is created manually, the connectivity `"error"` events are emitted on the Connection itself.
```js
connection.on("error", (err) => { /* handle connectivity errors here */ });
```
If you already have a NodeRedis/ioredis client, you can ask Bottleneck to reuse it, although currently the Connection object will still create a second client for pubsub operations:
```js
import Redis from "redis";
const client = Redis.createClient({/* options */});

const connection = new Bottleneck.RedisConnection({
  // `clientOptions` and `clusterNodes` will be ignored since we're passing a raw client
  client: client
});

const limiter = new Bottleneck({ connection: connection });
const group = new Bottleneck.Group({ connection: connection });
```
Depending on your application, using more clients can improve performance.

Use the `disconnect(flush)` method to close the Redis clients.
```js
limiter.disconnect();
group.disconnect();
```
If you created the Connection object manually, you need to call `connection.disconnect()` instead, for safety reasons.

## Debugging your application

Debugging complex scheduling logic can be difficult, especially when priorities, weights, and network latency all interact with one another.

If your application is not behaving as expected, start by making sure you're catching `"error"` [events emitted](#events) by your limiters and your Groups. Those errors are most likely uncaught exceptions from your application code.

Make sure you've read the ['Gotchas'](#gotchas--common-mistakes) section.

To see exactly what a limiter is doing in real time, listen to the `"debug"` event. It contains detailed information about how the limiter is executing your code. Adding [job IDs](#job-options) to all your jobs makes the debug output more readable.
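
For example, a minimal sketch pairing a debug listener with job IDs (`client.get(url)` is assumed to return a promise):
```js
limiter.on("debug", (message, data) => {
  console.log(`[bottleneck] ${message}`, data);
});

limiter.schedule({ id: "user-42" }, () => client.get(url));
// The debug output now references "user-42" at each lifecycle transition.
```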

When Bottleneck has to fail one of your jobs, it does so by using `BottleneckError` objects. This lets you tell those errors apart from your own code's errors:
```js
limiter.schedule(fn)
.then((result) => { /* ... */ } )
.catch((error) => {
  if (error instanceof Bottleneck.BottleneckError) {
    /* ... */
  }
});
```

## Upgrading to v2

The internal algorithms essentially haven't changed from v1, but many small changes to the interface were made to introduce new features.

All the breaking changes:
- Bottleneck v2 requires Node 6+ or a modern browser. Use `require("bottleneck/es5")` if you need ES5 support in v2. Bottleneck v1 will continue to use ES5 only.
- The Bottleneck constructor now takes an options object. See [Constructor](#constructor).
- The `Cluster` feature is now called `Group`. This is to distinguish it from the new v2 [Clustering](#clustering) feature.
- The `Group` constructor takes an options object to match the limiter constructor.
- Jobs take an optional options object. See [Job options](#job-options).
- Removed `submitPriority()`, use `submit()` with an options object instead.
- Removed `schedulePriority()`, use `schedule()` with an options object instead.
- The `rejectOnDrop` option is now `true` by default. It can be set to `false` if you wish to retain v1 behavior. However, this option is left undocumented as using it is considered to be a poor practice.
- Use `null` instead of `0` to indicate an unlimited `maxConcurrent` value.
- Use `null` instead of `-1` to indicate an unlimited `highWater` value.
- Renamed `changeSettings()` to `updateSettings()`; it now returns a promise to indicate completion. It takes the same options object as the constructor.
- Renamed `nbQueued()` to `queued()`.
- Renamed `nbRunning` to `running()`; it now returns its result using a promise.
- Removed `isBlocked()`.
- Changing the Promise library is now done through the options object like any other limiter setting.
- Removed `changePenalty()`, it is now done through the options object like any other limiter setting.
- Removed `changeReservoir()`, it is now done through the options object like any other limiter setting.
- Removed `stopAll()`. Use the new `stop()` method.
- `check()` now accepts an optional `weight` argument, and returns its result using a promise.
- Removed the `Group` `changeTimeout()` method. Instead, pass a `timeout` option when creating a Group.

Version 2 is more user-friendly and powerful.

After upgrading your code, please take a minute to read the [Debugging your application](#debugging-your-application) chapter.


## Contributing

This README is always in need of improvements. If wording can be clearer and simpler, please consider forking this repo and submitting a Pull Request, or simply opening an issue.

Suggestions and bug reports are also welcome.

To work on the Bottleneck code, simply clone the repo, make your changes to the files located in `src/` only, then run `./scripts/build.sh && npm test` to ensure that everything is set up correctly.

To speed up compilation time during development, run `./scripts/build.sh dev` instead. Make sure to build and test without `dev` before submitting a PR.

The tests must also pass in Clustering mode and using the ES5 bundle. You'll need a Redis server running locally (latency needs to be minimal to run the tests). If the server isn't using the default hostname and port, you can set those in the `.env` file. Then run `./scripts/build.sh && npm run test-all`.

All contributions are appreciated and will be considered.

[license-url]: https://github.com/SGrondin/bottleneck/blob/master/LICENSE

[npm-url]: https://www.npmjs.com/package/bottleneck
[npm-license]: https://img.shields.io/npm/l/bottleneck.svg?style=flat
[npm-version]: https://img.shields.io/npm/v/bottleneck.svg?style=flat
[npm-downloads]: https://img.shields.io/npm/dm/bottleneck.svg?style=flat