# bottleneck

[![Downloads][npm-downloads]][npm-url]
[![version][npm-version]][npm-url]
[![License][npm-license]][license-url]


Bottleneck is a lightweight and efficient Task Scheduler and Rate Limiter for Node.js and the browser.

Bottleneck is an easy solution as it adds very little complexity to your code. It is battle-hardened, reliable, production-ready, and used at scale in private companies and open source software.

It supports **Clustering**: it can rate limit jobs across multiple Node.js instances. It uses Redis and strictly atomic operations to stay reliable in the presence of unreliable clients and networks. It also supports *Redis Cluster* and *Redis Sentinel*.

**[Upgrading from version 1?](#upgrading-to-v2)**

<!-- toc -->

- [Install](#install)
- [Quick Start](#quick-start)
  * [Gotchas](#gotchas)
- [Constructor](#constructor)
- [`submit()`](#submit)
- [`schedule()`](#schedule)
- [`wrap()`](#wrap)
- [Job Options](#job-options)
- [Jobs Lifecycle](#jobs-lifecycle)
- [Events](#events)
- [Retries](#retries)
- [`updateSettings()`](#updatesettings)
- [`incrementReservoir()`](#incrementreservoir)
- [`currentReservoir()`](#currentreservoir)
- [`stop()`](#stop)
- [`chain()`](#chain)
- [Group](#group)
- [Batching](#batching)
- [Clustering](#clustering)
- [Debugging Your Application](#debugging-your-application)
- [Upgrading To v2](#upgrading-to-v2)
- [Contributing](#contributing)

<!-- tocstop -->

## Install

```
npm install --save bottleneck
```

```js
import Bottleneck from "bottleneck";

// Note: To support older browsers and Node <6.0, you must import the ES5 bundle instead.
var Bottleneck = require("bottleneck/es5");
```

## Quick Start

### Step 1 of 3

Most APIs have a rate limit. For example, to execute 3 requests per second:
```js
const limiter = new Bottleneck({
  minTime: 333
});
```
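Where does 333 come from? It's one second divided by the allowed rate. A tiny helper (hypothetical, not part of Bottleneck's API) makes the conversion explicit:

```javascript
// Hypothetical helper, not part of Bottleneck: convert a
// "N requests per second" limit into a minTime value in milliseconds.
// Math.floor matches the 333 used above for 3 requests per second.
function minTimeFor(requestsPerSecond) {
  return Math.floor(1000 / requestsPerSecond);
}

minTimeFor(3);  // 333
minTimeFor(10); // 100
```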

If there's a chance some requests might take longer than 333ms and you want to prevent more than 1 request from running at a time, add `maxConcurrent: 1`:
```js
const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 333
});
```

**Sometimes rate limits instead take the form of "X requests every Y seconds".** In this example, we throttle to 100 requests every 60 seconds:
```js
const limiter = new Bottleneck({
  reservoir: 100, // initial value
  reservoirRefreshAmount: 100,
  reservoirRefreshInterval: 60 * 1000, // must be divisible by 250

  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 1,
  minTime: 333
});
```
`reservoir` is a counter decremented every time a job is launched; we set its initial value to 100. Then, every `reservoirRefreshInterval` (60000 ms), `reservoir` is automatically reset to `reservoirRefreshAmount` (100).

**IMPORTANT:** For safety reasons, it's strongly recommended to also use `minTime` and/or `maxConcurrent` to spread out the load. For example, suppose a lot of jobs are queued up because the `reservoir` is 0. As soon as the reservoir refresh is triggered, 100 jobs will automatically be launched, all at the same time! To prevent that and keep your application running smoothly, use `minTime` and/or `maxConcurrent` to *stagger* the jobs.
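The reservoir mechanics described above can be sketched in a few lines of plain JavaScript (an illustration of the accounting only, not Bottleneck's actual implementation):

```javascript
// Sketch of reservoir accounting (illustration only, not Bottleneck's code).
// Launching a job decrements the counter; at 0, jobs stay queued until the
// periodic refresh resets the counter to the refresh amount.
function makeReservoir(initial, refreshAmount) {
  let value = initial;
  return {
    tryLaunch() {
      if (value <= 0) return false; // reservoir empty: job stays queued
      value -= 1;
      return true;
    },
    refresh() { value = refreshAmount; }, // fires every reservoirRefreshInterval ms
    current() { return value; }
  };
}

const r = makeReservoir(2, 2);
r.tryLaunch(); // true
r.tryLaunch(); // true
r.tryLaunch(); // false, reservoir is 0
r.refresh();   // the interval fires
r.tryLaunch(); // true again
```

This also illustrates the warning above: the moment `refresh()` runs, every queued job becomes launchable at once, which is why `minTime` and/or `maxConcurrent` are recommended alongside a reservoir.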

### Step 2 of 3

#### ➤ Using callbacks?

Instead of this:
```js
someAsyncCall(arg1, arg2, callback);
```
Do this:
```js
limiter.submit(someAsyncCall, arg1, arg2, callback);
```

#### ➤ Using promises?

Instead of this:
```js
myFunction(arg1, arg2)
.then((result) => {
  /* handle result */
});
```
Do this:
```js
limiter.schedule(() => myFunction(arg1, arg2))
.then((result) => {
  /* handle result */
});
```
Or this:
```js
const wrapped = limiter.wrap(myFunction);

wrapped(arg1, arg2)
.then((result) => {
  /* handle result */
});
```

#### ➤ Using async/await?

Instead of this:
```js
const result = await myFunction(arg1, arg2);
```
Do this:
```js
const result = await limiter.schedule(() => myFunction(arg1, arg2));
```
Or this:
```js
const wrapped = limiter.wrap(myFunction);

const result = await wrapped(arg1, arg2);
```

### Step 3 of 3

Remember...

Bottleneck builds a queue of jobs and executes them as soon as possible. By default, the jobs will be executed in the order they were received.

**Read the 'Gotchas' and you're good to go**. Or keep reading to learn about all the fine tuning and advanced options available. If your rate limits need to be enforced across a cluster of computers, read the [Clustering](#clustering) docs.

[Need help debugging your application?](#debugging-your-application)

Instead of throttling, maybe you [want to batch up requests](#batching) into fewer calls?

#### Gotchas

* If you're passing an object's method as a job, you'll probably need to `bind()` the object, or call the method through an arrow function:
```js
// instead of this:
limiter.schedule(object.doSomething, arg1, arg2);
// do this:
limiter.schedule(object.doSomething.bind(object), arg1, arg2);
// or, simpler:
limiter.schedule(() => object.doSomething(arg1, arg2));
```

* Bottleneck requires Node 6+ to function. However, an ES5 build is included: `var Bottleneck = require("bottleneck/es5");`.

* Make sure you're catching `"error"` events emitted by your limiters!

* Consider setting a `maxConcurrent` value instead of leaving it `null`. This can help your application's performance, especially if you think the limiter's queue might become very long.

* **When using `submit()`**, if a callback isn't necessary, you must pass `null` or an empty function instead. It will not work otherwise.

* **When using `submit()`**, make sure all the jobs will eventually complete by calling their callback, or set an [`expiration`](#job-options). Even if you submitted your job with a `null` callback, it still needs to call its callback. This is particularly important if you are using a `maxConcurrent` value that isn't `null` (unlimited), otherwise those uncompleted jobs will clog up the limiter and no new jobs will be allowed to run. It's safe to call the callback more than once; subsequent calls are ignored.

## Docs

### Constructor

```js
const limiter = new Bottleneck({/* options */});
```

Basic options:

| Option | Default | Description |
|--------|---------|-------------|
| `maxConcurrent` | `null` (unlimited) | How many jobs can be executing at the same time. Consider setting a value instead of leaving it `null`; it can help your application's performance, especially if you think the limiter's queue might get very long. |
| `minTime` | `0` ms | How long to wait after launching a job before launching another one. |
| `highWater` | `null` (unlimited) | How long can the queue get? When the queue length exceeds that value, the selected `strategy` is executed to shed the load. |
| `strategy` | `Bottleneck.strategy.LEAK` | Which strategy to use when the queue gets longer than the high water mark. [Read about strategies](#strategies). Strategies are never executed if `highWater` is `null`. |
| `penalty` | `15 * minTime`, or `5000` when `minTime` is `0` | The `penalty` value used by the `BLOCK` strategy. |
| `reservoir` | `null` (unlimited) | How many jobs can be executed before the limiter stops executing jobs. If `reservoir` reaches `0`, no jobs will be executed until it is no longer `0`. New jobs will still be queued up. |
| `reservoirRefreshInterval` | `null` (disabled) | Every `reservoirRefreshInterval` milliseconds, the `reservoir` value will be automatically reset to `reservoirRefreshAmount`. The `reservoirRefreshInterval` value should be a [multiple of 250 (5000 for Clustering)](https://github.com/SGrondin/bottleneck/issues/88). |
| `reservoirRefreshAmount` | `null` (disabled) | The value to reset `reservoir` to when `reservoirRefreshInterval` is in use. |
| `Promise` | `Promise` (built-in) | This lets you override the Promise library used by Bottleneck. |


### submit()

Adds a job to the queue. This is the callback version of `schedule()`.
```js
limiter.submit(someAsyncCall, arg1, arg2, callback);
```
You can pass `null` instead of an empty function if there is no callback, but `someAsyncCall` still needs to call **its** callback to let the limiter know it has completed its work.

`submit()` can also accept [advanced options](#job-options).

### schedule()

Adds a job to the queue. This is the Promise and async/await version of `submit()`.
```js
const fn = function(arg1, arg2) {
  return httpGet(arg1, arg2); // Here httpGet() returns a promise
};

limiter.schedule(fn, arg1, arg2)
.then((result) => {
  /* ... */
});
```
In other words, `schedule()` takes a function **fn** and a list of arguments, and returns a promise for the result of calling `fn` with those arguments; the call is executed according to the rate limits.

`schedule()` can also accept [advanced options](#job-options).

Here's another example:
```js
// suppose that `client.get(url)` returns a promise

const url = "https://wikipedia.org";

limiter.schedule(() => client.get(url))
.then(response => console.log(response.body));
```

### wrap()

Takes a function that returns a promise. Returns a function identical to the original, but rate limited.
```js
const wrapped = limiter.wrap(fn);

wrapped()
.then(function (result) {
  /* ... */
})
.catch(function (error) {
  // Bottleneck might need to fail the job even if the original function can never fail.
  // For example, your job is taking longer than the `expiration` time you've set.
});
```

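Conceptually, `wrap()` is a thin layer over `schedule()`. A sketch in plain JavaScript (illustrative only; `schedule` here stands in for `limiter.schedule`):

```javascript
// Illustration of what wrapping does: close over the original function
// and route every call through the scheduler. Not Bottleneck's source.
function wrap(schedule, fn) {
  return (...args) => schedule(() => fn(...args));
}

// With a pass-through scheduler, the wrapped function behaves like the original:
const passThrough = (job) => job();
const add = (a, b) => Promise.resolve(a + b);
const limitedAdd = wrap(passThrough, add);

limitedAdd(2, 3).then((sum) => console.log(sum)); // 5
```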
### Job Options

`submit()`, `schedule()`, and `wrap()` all accept advanced options.
```js
// Submit
limiter.submit({/* options */}, someAsyncCall, arg1, arg2, callback);

// Schedule
limiter.schedule({/* options */}, fn, arg1, arg2);

// Wrap
const wrapped = limiter.wrap(fn);
wrapped.withOptions({/* options */}, arg1, arg2);
```

| Option | Default | Description |
|--------|---------|-------------|
| `priority` | `5` | A priority between `0` and `9`. A job with a priority of `4` will be queued ahead of a job with a priority of `5`. **Important:** You must set a low `maxConcurrent` value for priorities to work, otherwise there is nothing to queue because jobs will be scheduled immediately! |
| `weight` | `1` | Must be an integer equal to or higher than `0`. The `weight` is what increases the number of running jobs (up to `maxConcurrent`) and decreases the `reservoir` value. |
| `expiration` | `null` (unlimited) | The number of milliseconds a job is given to complete. Jobs that execute for longer than `expiration` ms will be failed with a `BottleneckError`. |
| `id` | `<no-id>` | You should give an ID to your jobs, it helps with [debugging](#debugging-your-application). |

### Strategies

A strategy is a simple algorithm that is executed every time adding a job would cause the number of queued jobs to exceed `highWater`. Strategies are never executed if `highWater` is `null`.

#### Bottleneck.strategy.LEAK
When adding a new job to a limiter, if the queue length reaches `highWater`, drop the oldest job with the lowest priority. This is useful when jobs that have been waiting for too long are not important anymore. If all the queued jobs are more important (based on their `priority` value) than the one being added, it will not be added.
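For illustration, here is the LEAK rule expressed over a plain array of queued jobs (a sketch, not Bottleneck's internals; recall that a *higher* `priority` number means a *less* important job):

```javascript
// Sketch of the LEAK strategy: drop the oldest job among those with the
// lowest priority (highest priority number). Illustration only.
function leak(queue) {
  const lowest = Math.max(...queue.map((job) => job.priority));
  const index = queue.findIndex((job) => job.priority === lowest);
  return queue.splice(index, 1)[0];
}

const queue = [
  { id: "a", priority: 5 },
  { id: "b", priority: 9 },
  { id: "c", priority: 9 }
];

leak(queue); // drops { id: "b", priority: 9 }, the oldest least-important job
```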

#### Bottleneck.strategy.OVERFLOW_PRIORITY
Same as `LEAK`, except it will only drop jobs that are *less important* than the one being added. If all the queued jobs are as or more important than the new one, it will not be added.

#### Bottleneck.strategy.OVERFLOW
When adding a new job to a limiter, if the queue length reaches `highWater`, do not add the new job. This strategy totally ignores priority levels.

#### Bottleneck.strategy.BLOCK
When adding a new job to a limiter, if the queue length reaches `highWater`, the limiter falls into "blocked mode". All queued jobs are dropped and no new jobs will be accepted until the limiter unblocks. It will unblock after `penalty` milliseconds have passed without receiving a new job. `penalty` is equal to `15 * minTime` (or `5000` if `minTime` is `0`) by default. This strategy is ideal when bruteforce attacks are to be expected. This strategy totally ignores priority levels.


### Jobs lifecycle

1. **Received**. Your new job has been added to your limiter. Bottleneck needs to check whether it can be accepted into the queue.
2. **Queued**. Bottleneck has accepted your job, but it cannot tell at what exact timestamp it will run yet, because that depends on previous jobs.
3. **Running**. Your job is not in the queue anymore; it will be executed after a delay that was computed according to your `minTime` setting.
4. **Executing**. Your job is executing its code.
5. **Done**. Your job has completed.

**Note:** By default, Bottleneck does not keep track of DONE jobs, to save memory. You can enable this feature by passing `trackDoneStatus: true` as an option when creating a limiter.

#### counts()

```js
const counts = limiter.counts();

console.log(counts);
/*
{
  RECEIVED: 0,
  QUEUED: 0,
  RUNNING: 0,
  EXECUTING: 0,
  DONE: 0
}
*/
```

Returns an object with the current number of jobs per status in the limiter.

#### jobStatus()

```js
console.log(limiter.jobStatus("some-job-id"));
// Example: QUEUED
```

Returns the status of the job with the provided job id **in the limiter**. Returns `null` if no job with that id exists.

#### jobs()

```js
console.log(limiter.jobs("RUNNING"));
// Example: ['id1', 'id2']
```

Returns an array of all the job ids with the specified status **in the limiter**. Not passing a status string returns all the known ids.

#### queued()

```js
const count = limiter.queued(priority);

console.log(count);
```

`priority` is optional. Returns the number of `QUEUED` jobs with the given `priority` level. Omitting the `priority` argument returns the total number of queued jobs **in the limiter**.

#### empty()

```js
if (limiter.empty()) {
  // do something...
}
```

Returns a boolean which indicates whether there are any `RECEIVED` or `QUEUED` jobs **in the limiter**.

#### running()

```js
limiter.running()
.then((count) => console.log(count));
```

Returns a promise that returns the **total weight** of the `RUNNING` and `EXECUTING` jobs **in the Cluster**.

#### done()

```js
limiter.done()
.then((count) => console.log(count));
```

Returns a promise that returns the **total weight** of `DONE` jobs **in the Cluster**. Does not require passing the `trackDoneStatus: true` option.

#### check()

```js
limiter.check()
.then((wouldRunNow) => console.log(wouldRunNow));
```
Checks if a new job would be executed immediately if it was submitted now. Returns a promise that returns a boolean.


### Events

Event names: `"error"`, `"failed"`, `"retry"`, `"empty"`, `"idle"`, `"dropped"`, `"depleted"` and `"debug"`.

__'error'__
```js
limiter.on("error", function (error) {
  /* handle errors here */
});
```

The two main causes of error events are: uncaught exceptions in your event handlers, and network errors when Clustering is enabled.

__'failed'__
```js
limiter.on("failed", function (error, jobInfo) {
  // This will be called every time a job fails.
});
```

__'retry'__

See [Retries](#retries) to learn how to automatically retry jobs.
```js
limiter.on("retry", function (error, jobInfo) {
  // This will be called every time a job is retried.
});
```

__'empty'__
```js
limiter.on("empty", function () {
  // This will be called when `limiter.empty()` becomes true.
});
```

__'idle'__
```js
limiter.on("idle", function () {
  // This will be called when `limiter.empty()` is `true` and `limiter.running()` is `0`.
});
```

__'dropped'__
```js
limiter.on("dropped", function (dropped) {
  // This will be called when a strategy was triggered.
  // The dropped request is passed to this event listener.
});
```

__'depleted'__
```js
limiter.on("depleted", function (empty) {
  // This will be called every time the reservoir drops to 0.
  // The `empty` (boolean) argument indicates whether `limiter.empty()` is currently true.
});
```

__'debug'__
```js
limiter.on("debug", function (message, data) {
  // Useful to figure out what the limiter is doing in real time
  // and to help debug your application
});
```

Use `removeAllListeners()` with an optional event name as first argument to remove listeners.

Use `.once()` instead of `.on()` to only receive a single event.


### Retries

The following example:
```js
const limiter = new Bottleneck();

// Listen to the "failed" event
limiter.on("failed", async (error, jobInfo) => {
  const id = jobInfo.options.id;
  console.warn(`Job ${id} failed: ${error}`);

  if (jobInfo.retryCount === 0) { // Here we only retry once
    console.log(`Retrying job ${id} in 25ms!`);
    return 25;
  }
});

// Listen to the "retry" event
limiter.on("retry", (error, jobInfo) => console.log(`Now retrying ${jobInfo.options.id}`));

const main = async function () {
  let executions = 0;

  // Schedule one job
  const result = await limiter.schedule({ id: 'ABC123' }, async () => {
    executions++;
    if (executions === 1) {
      throw new Error("Boom!");
    } else {
      return "Success!";
    }
  });

  console.log(`Result: ${result}`);
};

main();
```
will output
```
Job ABC123 failed: Error: Boom!
Retrying job ABC123 in 25ms!
Now retrying ABC123
Result: Success!
```
To re-run your job, simply return an integer from the `'failed'` event handler. The number returned is how many milliseconds to wait before retrying it. Return `0` to retry it immediately.

**IMPORTANT:** When you ask the limiter to retry a job it will not send it back into the queue. It will stay in the `EXECUTING` [state](#jobs-lifecycle) until it succeeds or until you stop retrying it. **This means that it counts as a concurrent job for `maxConcurrent` even while it's just waiting to be retried.** The number of milliseconds to wait ignores your `minTime` settings.
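The retry contract above (return a number of milliseconds from the `'failed'` handler to retry; return nothing to let the error through) can be captured in a generic helper. This is an illustrative sketch, not Bottleneck's implementation:

```javascript
// Generic sketch of the retry loop (illustration only). The handler is
// called with the error and a jobInfo-like object; returning a number
// schedules a retry after that many milliseconds, anything else rethrows.
async function runWithRetries(fn, onFailed) {
  let retryCount = 0;
  for (;;) {
    try {
      return await fn();
    } catch (error) {
      const delay = await onFailed(error, { retryCount });
      if (typeof delay !== "number") throw error; // stop retrying
      retryCount++;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Note how the job is never re-queued: the loop holds on to it while waiting, mirroring how a retried job stays `EXECUTING` as described above.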

### updateSettings()

```js
limiter.updateSettings(options);
```
The options are the same as the [limiter constructor](#constructor).

**Note:** Changes don't affect `SCHEDULED` jobs.

### incrementReservoir()

```js
limiter.incrementReservoir(incrementBy);
```
Returns a promise that returns the new reservoir value.

### currentReservoir()

```js
limiter.currentReservoir()
.then((reservoir) => console.log(reservoir));
```
Returns a promise that returns the current reservoir value.

### stop()

The `stop()` method is used to safely shut down a limiter. It prevents any new jobs from being added to the limiter and waits for all `EXECUTING` jobs to complete.

```js
limiter.stop(options)
.then(() => {
  console.log("Shutdown completed!");
});
```

`stop()` returns a promise that resolves once all the `EXECUTING` jobs have completed and, if desired, once all non-`EXECUTING` jobs have been dropped.

| Option | Default | Description |
|--------|---------|-------------|
| `dropWaitingJobs` | `true` | When `true`, drop all the `RECEIVED`, `QUEUED` and `RUNNING` jobs. When `false`, allow those jobs to complete before resolving the Promise returned by this method. |
| `dropErrorMessage` | `This limiter has been stopped.` | The error message used to drop jobs when `dropWaitingJobs` is `true`. |
| `enqueueErrorMessage` | `This limiter has been stopped and cannot accept new jobs.` | The error message used to reject a job added to the limiter after `stop()` has been called. |

### chain()

Chains this limiter to another limiter: tasks that are ready to be executed will be added to that other limiter. Suppose you have 2 types of tasks, A and B. They both have their own limiter with their own settings, but both must also follow a global limiter G:
```js
const limiterA = new Bottleneck( /* some settings */ );
const limiterB = new Bottleneck( /* some different settings */ );
const limiterG = new Bottleneck( /* some global settings */ );

limiterA.chain(limiterG);
limiterB.chain(limiterG);

// Requests added to limiterA must follow the A and G rate limits.
// Requests added to limiterB must follow the B and G rate limits.
// Requests added to limiterG must follow the G rate limits.
```

To unchain, call `limiter.chain(null);`.
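The effect of chaining can be pictured as nested scheduling: a job added to A must clear A's limits and then G's before it actually executes. A conceptual sketch (not Bottleneck's code; `scheduleA`/`scheduleG` stand in for the two limiters):

```javascript
// Conceptual sketch of chaining (illustration only): a job routed
// through A is re-scheduled on G before it actually executes.
function chainSchedulers(scheduleA, scheduleG) {
  return (fn) => scheduleA(() => scheduleG(fn));
}

// With counting pass-through schedulers, each job passes both gates:
let throughA = 0;
let throughG = 0;
const gateA = (job) => { throughA++; return job(); };
const gateG = (job) => { throughG++; return job(); };

const scheduleChained = chainSchedulers(gateA, gateG);
scheduleChained(() => "done"); // increments both counters
```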

## Group

The `Group` feature of Bottleneck manages many limiters automatically for you. It creates limiters dynamically and transparently.

Let's take a DNS server as an example of how Bottleneck can be used. It's a service that sees a lot of abuse and where incoming DNS requests need to be rate limited. Bottleneck is so tiny, it's acceptable to create one limiter for each origin IP, even if it means creating thousands of limiters. The `Group` feature is perfect for this use case. Create one Group and use the origin IP to rate limit each IP independently. Each call with the same key (IP) will be routed to the same underlying limiter. A Group is created like a limiter:


```js
const group = new Bottleneck.Group(options);
```

The `options` object will be used for every limiter created by the Group.

The Group is then used with the `.key(str)` method:

```js
// In this example, the key is an IP
group.key("77.66.54.32").submit(someAsyncCall, arg1, arg2, cb);
```

#### key()

* `str` : The key to use. All jobs added with the same key will use the same underlying limiter. *Default: `""`*

The return value of `.key(str)` is a limiter. If it doesn't already exist, it is generated for you. Calling `key()` is how limiters are created inside a Group.

Limiters that have been idle for longer than 5 minutes are deleted to avoid memory leaks; this value can be changed by passing a different `timeout` option, in milliseconds.
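The key-to-limiter routing can be pictured as a lazy map (a sketch only, not `Bottleneck.Group`; `createLimiter` stands in for `new Bottleneck(options)`):

```javascript
// Sketch of a Group's lazy key -> limiter map (illustration only).
// The first call for a key creates its limiter; later calls with the
// same key return the same one.
function makeGroup(createLimiter) {
  const limiters = new Map();
  return {
    key(str = "") {
      if (!limiters.has(str)) limiters.set(str, createLimiter(str));
      return limiters.get(str);
    },
    keys() { return [...limiters.keys()]; }
  };
}

const group = makeGroup((key) => ({ key })); // stand-in limiter factory
group.key("77.66.54.32") === group.key("77.66.54.32"); // same limiter both times
```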

#### on("created")

```js
group.on("created", (limiter, key) => {
  console.log("A new limiter was created for key: " + key)

  // Prepare the limiter, for example we'll want to listen to its "error" events!
  limiter.on("error", (err) => {
    // Handle errors here
  })
});
```

Listening for the `"created"` event is the recommended way to set up a new limiter. Your event handler is executed before `key()` returns the newly created limiter.

#### updateSettings()

```js
const group = new Bottleneck.Group({ maxConcurrent: 2, minTime: 250 });
group.updateSettings({ minTime: 500 });
```
After executing the above commands, **new limiters** will be created with `{ maxConcurrent: 2, minTime: 500 }`.


#### deleteKey()

* `str`: The key for the limiter to delete.

Manually deletes the limiter at the specified key. When using Clustering, the Redis data is immediately deleted and the other Groups in the Cluster will eventually delete their local key automatically, unless it is still being used.

#### keys()

Returns an array containing all the keys in the Group.

#### clusterKeys()

Same as `group.keys()`, but returns all keys in this Group ID across the Cluster.

#### limiters()

```js
const limiters = group.limiters();

console.log(limiters);
// [ { key: "some key", limiter: <limiter> }, { key: "some other key", limiter: <some other limiter> } ]
```

## Batching

Some APIs can accept multiple operations in a single call. Bottleneck's Batching feature helps you take advantage of those APIs:
```js
const batcher = new Bottleneck.Batcher({
  maxTime: 1000,
  maxSize: 10
});

batcher.on("batch", (batch) => {
  console.log(batch); // ["some-data", "some-other-data"]

  // Handle batch here
});

batcher.add("some-data");
batcher.add("some-other-data");
```

`batcher.add()` returns a Promise that resolves once the request has been flushed to a `"batch"` event.

| Option | Default | Description |
|--------|---------|-------------|
| `maxTime` | `null` (unlimited) | Maximum acceptable time (in milliseconds) a request can have to wait before being flushed to the `"batch"` event. |
| `maxSize` | `null` (unlimited) | Maximum number of requests in a batch. |

Batching doesn't throttle requests; it only groups them up optimally according to your `maxTime` and `maxSize` settings.
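The size-based half of that grouping can be sketched in plain JavaScript (illustration only, not `Bottleneck.Batcher`; the real Batcher also flushes on `maxTime`):

```javascript
// Sketch of maxSize-based batching (illustration only): buffer items
// and flush them as one "batch" once maxSize is reached.
function makeBatcher(maxSize, onBatch) {
  let buffer = [];
  return {
    add(item) {
      buffer.push(item);
      if (buffer.length >= maxSize) {
        const batch = buffer;
        buffer = [];
        onBatch(batch); // stands in for emitting the "batch" event
      }
    }
  };
}

const batches = [];
const batcher = makeBatcher(2, (batch) => batches.push(batch));
batcher.add("some-data");
batcher.add("some-other-data"); // flushes ["some-data", "some-other-data"]
```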

## Clustering

Clustering lets many limiters access the same shared state, stored in Redis. Changes to the state are Atomic, Consistent and Isolated (and fully [ACID](https://en.wikipedia.org/wiki/ACID) with the right [Durability](https://redis.io/topics/persistence) configuration), to eliminate any chances of race conditions or state corruption. Your settings, such as `maxConcurrent`, `minTime`, etc., are shared across the whole cluster, which means, for example, that `{ maxConcurrent: 5 }` guarantees no more than 5 jobs can ever run at a time in the entire cluster of limiters. 100% of Bottleneck's features are supported in Clustering mode. Enabling Clustering is as simple as changing a few settings. It's also a convenient way to store or export state for later use.

Bottleneck will attempt to spread load evenly across limiters.

### Enabling Clustering

First, add `redis` or `ioredis` to your application's dependencies:
```bash
# NodeRedis (https://github.com/NodeRedis/node_redis)
npm install --save redis

# or ioredis (https://github.com/luin/ioredis)
npm install --save ioredis
```
Then create a limiter or a Group:
```js
const limiter = new Bottleneck({
  /* Some basic options */
  maxConcurrent: 5,
  minTime: 500,
  id: "my-super-app", // All limiters with the same id will be clustered together

  /* Clustering options */
  datastore: "redis", // or "ioredis"
  clearDatastore: false,
  clientOptions: {
    host: "127.0.0.1",
    port: 6379

    // Redis client options
    // Using NodeRedis? See https://github.com/NodeRedis/node_redis#options-object-properties
    // Using ioredis? See https://github.com/luin/ioredis/blob/master/API.md#new-redisport-host-options
  }
});
```

| Option | Default | Description |
|--------|---------|-------------|
| `datastore` | `"local"` | Where the limiter stores its internal state. The default (`"local"`) keeps the state in the limiter itself. Set it to `"redis"` or `"ioredis"` to enable Clustering. |
| `clearDatastore` | `false` | When set to `true`, on initial startup, the limiter will wipe any existing Bottleneck state data on the Redis db. |
| `clientOptions` | `{}` | This object is passed directly to the redis client library you've selected. |
| `clusterNodes` | `null` | **ioredis only.** When `clusterNodes` is not null, the client will be instantiated by calling `new Redis.Cluster(clusterNodes, clientOptions)` instead of `new Redis(clientOptions)`. |
| `timeout` | `null` (no TTL) | The Redis [TTL](https://redis.io/commands/ttl) in milliseconds for the keys created by the limiter. When `timeout` is set, the limiter's state will be automatically removed from Redis after `timeout` milliseconds of inactivity. |

**Note: When using Groups**, the `timeout` option has a default of `300000` milliseconds and the generated limiters automatically receive an `id` with the pattern `${group.id}-${KEY}`.

### Important considerations when Clustering

The first limiter connecting to Redis will store its [constructor options](#constructor) on Redis and all subsequent limiters will be using those settings. You can alter the constructor options used by all the connected limiters by calling `updateSettings()`. The `clearDatastore` option instructs a new limiter to wipe any previous Bottleneck data (for that `id`), including previously stored settings.

Queued jobs are **NOT** stored on Redis. They are local to each limiter. Exiting the Node.js process will lose those jobs. This is because Bottleneck has no way to propagate the JS code to run a job across a different Node.js process than the one it originated on. Bottleneck doesn't keep track of the queue contents of the limiters on a cluster for performance and reliability reasons. You can use something like [`BeeQueue`](https://github.com/bee-queue/bee-queue) in addition to Bottleneck to get around this limitation.

Due to the above, functionality relying on the queue length happens purely locally:
- Priorities are local. A higher priority job will run before a lower priority job **on the same limiter**. Another limiter on the cluster might run a lower priority job before our higher priority one.
- Assuming constant priority levels, Bottleneck guarantees that jobs will be run in the order they were received **on the same limiter**. Another limiter on the cluster might run a job received later before ours runs.
- `highWater` and load shedding ([strategies](#strategies)) are per limiter. However, one limiter entering Blocked mode will put the entire cluster in Blocked mode until `penalty` milliseconds have passed. See [Strategies](#strategies).
- The `"empty"` event is triggered when the (local) queue is empty.
- The `"idle"` event is triggered when the (local) queue is empty *and* no jobs are currently running anywhere in the cluster.

You must work around these limitations in your application code if they are an issue to you. The `publish()` method could be useful here.

The current design guarantees reliability, is highly performant and lets limiters come and go. Your application can scale up or down, and clients can be disconnected at any time without issues.

It is **strongly recommended** that you give an `id` to every limiter and Group since it is used to build the name of your limiter's Redis keys! Limiters with the same `id` inside the same Redis db will be sharing the same datastore.

It is **strongly recommended** that you set an `expiration` (See [Job Options](#job-options)) *on every job*, since that lets the cluster recover from crashed or disconnected clients. Otherwise, a client crashing while executing a job would not be able to tell the cluster to decrease its number of "running" jobs. By using expirations, those lost jobs are automatically cleared after the specified time has passed. Using expirations is essential to keeping a cluster reliable in the face of unpredictable application bugs, network hiccups, and so on.

Network latency between Node.js and Redis is not taken into account when calculating timings (such as `minTime`). To minimize the impact of latency, Bottleneck performs the absolute minimum number of state accesses. Keeping the Redis server close to your limiters will help you get a more consistent experience. Keeping the system time consistent across all clients will also help.

It is **strongly recommended** to [set up an `"error"` listener](#events) on all your limiters and on your Groups.

747### Clustering Methods
748
749The `ready()`, `publish()` and `clients()` methods also exist when using the `local` datastore, for code compatibility reasons: code written for `redis`/`ioredis` won't break with `local`.
750
751#### ready()
752
753This method returns a promise that resolves once the limiter is connected to Redis.
754
755As of v2.9.0, it's no longer necessary to wait for `.ready()` to resolve before issuing commands to a limiter. The commands will be queued until the limiter successfully connects. Make sure to listen to the `"error"` event to handle connection errors.
756
757```js
758const limiter = new Bottleneck({/* options */});
759
760limiter.on("error", (err) => {
761 // handle network errors
762});
763
764limiter.ready()
765.then(() => {
766 // The limiter is ready
767});
768```
769
770#### publish(message)
771
772This method broadcasts the `message` string to every limiter in the Cluster. It returns a promise.
773```js
774const limiter = new Bottleneck({/* options */});
775
776limiter.on("message", (msg) => {
777 console.log(msg); // prints "this is a string"
778});
779
780limiter.publish("this is a string");
781```
782
783To send objects, stringify them first:
784```js
785limiter.on("message", (msg) => {
786 console.log(JSON.parse(msg).hello) // prints "world"
787});
788
789limiter.publish(JSON.stringify({ hello: "world" }));
790```
791
792#### clients()
793
794If you need direct access to the redis clients, use `.clients()`:
795```js
796console.log(limiter.clients());
797// { client: <Redis Client>, subscriber: <Redis Client> }
798```
799
800### Additional Clustering information
801
802- Bottleneck is compatible with [Redis Clusters](https://redis.io/topics/cluster-tutorial), but you must use the `ioredis` datastore and the `clusterNodes` option.
803- Bottleneck is compatible with Redis Sentinel, but you must use the `ioredis` datastore.
804- Bottleneck's data is stored in Redis keys starting with `b_`. It also uses pubsub channels starting with `b_`. It will not interfere with any other data stored on the server.
805- Bottleneck loads a few Lua scripts on the Redis server using the `SCRIPT LOAD` command. These scripts only take up a few KB of memory. Running the `SCRIPT FLUSH` command will cause any connected limiters to experience critical errors until a new limiter connects to Redis and loads the scripts again.
806- The Lua scripts are highly optimized and designed to use as few resources as possible.
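As a sketch, connecting to a Redis Cluster with the `ioredis` datastore and the `clusterNodes` option might look like this (the `id` and the node addresses are placeholders):

```js
import Bottleneck from "bottleneck";

const limiter = new Bottleneck({
  id: "my-cluster-limiter",  // placeholder id
  datastore: "ioredis",
  clusterNodes: [
    { host: "10.0.0.1", port: 6379 },  // placeholder addresses
    { host: "10.0.0.2", port: 6379 }
  ]
});

limiter.on("error", (err) => { /* handle connection errors */ });
```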
807
808### Managing Redis Connections
809
810Bottleneck needs to create 2 Redis Clients to function, one for normal operations and one for pubsub subscriptions. These 2 clients are kept in a `Bottleneck.RedisConnection` (NodeRedis) or a `Bottleneck.IORedisConnection` (ioredis) object, referred to as the Connection object.
811
812By default, every Group and every standalone limiter (a limiter not created by a Group) will create their own Connection object, but it is possible to manually control this behavior. In this example, every Group and limiter is sharing the same Connection object and therefore the same 2 clients:
813```js
814const connection = new Bottleneck.RedisConnection({
815 clientOptions: {/* NodeRedis/ioredis options */}
816 // ioredis also accepts `clusterNodes` here
817});
818
819
820const limiter = new Bottleneck({ connection: connection });
821const group = new Bottleneck.Group({ connection: connection });
822```
823You can access and reuse the Connection object of any Group or limiter:
824```js
825const group = new Bottleneck.Group({ connection: limiter.connection });
826```
827When a Connection object is created manually, the connectivity `"error"` events are emitted on the Connection itself.
828```js
829connection.on("error", (err) => { /* handle connectivity errors here */ });
830```
831If you already have a NodeRedis/ioredis client, you can ask Bottleneck to reuse it, although currently the Connection object will still create a second client for pubsub operations:
832```js
833import Redis from "redis";
834const client = Redis.createClient({/* options */});
835
836const connection = new Bottleneck.RedisConnection({
837  // `clientOptions` and `clusterNodes` will be ignored since we're passing a raw client
838  client: client
839});
840
841const limiter = new Bottleneck({ connection: connection });
842const group = new Bottleneck.Group({ connection: connection });
843```
844Depending on your application, using more clients can improve performance.
845
846Use the `disconnect(flush)` method to close the Redis clients.
847```js
848limiter.disconnect();
849group.disconnect();
850```
851If you created the Connection object manually, you need to call `connection.disconnect()` instead, for safety reasons.
852
853## Debugging your application
854
855Debugging complex scheduling logic can be difficult, especially when priorities, weights, and network latency all interact with one another.
856
857If your application is not behaving as expected, start by making sure you're catching `"error"` [events emitted](#events) by your limiters and your Groups. Those errors are most likely uncaught exceptions from your application code.
858
859Make sure you've read the ['Gotchas'](#gotchas) section.
860
861To see exactly what a limiter is doing in real time, listen to the `"debug"` event. It contains detailed information about how the limiter is executing your code. Adding [job IDs](#job-options) to all your jobs makes the debug output more readable.
862
863When Bottleneck has to fail one of your jobs, it does so by using `BottleneckError` objects. This lets you tell those errors apart from your own code's errors:
864```js
865limiter.schedule(fn)
866.then((result) => { /* ... */ } )
867.catch((error) => {
868 if (error instanceof Bottleneck.BottleneckError) {
869 /* ... */
870 }
871});
872```
873
874## Upgrading to v2
875
876The internal algorithms essentially haven't changed from v1, but many small changes to the interface were made to introduce new features.
877
878All the breaking changes:
879- Bottleneck v2 requires Node 6+ or a modern browser. Use `require("bottleneck/es5")` if you need ES5 support in v2. Bottleneck v1 will continue to use ES5 only.
880- The Bottleneck constructor now takes an options object. See [Constructor](#constructor).
881- The `Cluster` feature is now called `Group`. This is to distinguish it from the new v2 [Clustering](#clustering) feature.
882- The `Group` constructor takes an options object to match the limiter constructor.
883- Jobs take an optional options object. See [Job options](#job-options).
884- Removed `submitPriority()`, use `submit()` with an options object instead.
885- Removed `schedulePriority()`, use `schedule()` with an options object instead.
886- The `rejectOnDrop` option is now `true` by default. It can be set to `false` if you wish to retain v1 behavior. However, this option is left undocumented, as disabling `rejectOnDrop` is considered poor practice.
887- Use `null` instead of `0` to indicate an unlimited `maxConcurrent` value.
888- Use `null` instead of `-1` to indicate an unlimited `highWater` value.
889- Renamed `changeSettings()` to `updateSettings()`, it now returns a promise to indicate completion. It takes the same options object as the constructor.
890- Renamed `nbQueued()` to `queued()`.
891- Renamed `nbRunning` to `running()`, it now returns its result using a promise.
892- Removed `isBlocked()`.
893- Changing the Promise library is now done through the options object like any other limiter setting.
894- Removed `changePenalty()`, it is now done through the options object like any other limiter setting.
895- Removed `changeReservoir()`, it is now done through the options object like any other limiter setting.
896- Removed `stopAll()`. Use the new `stop()` method.
897- `check()` now accepts an optional `weight` argument, and returns its result using a promise.
898- Removed the `Group` `changeTimeout()` method. Instead, pass a `timeout` option when creating a Group.
899
900Version 2 is more user-friendly and powerful.
901
902After upgrading your code, please take a minute to read the [Debugging your application](#debugging-your-application) chapter.
903
904
905## Contributing
906
907This README is always in need of improvements. If wording can be clearer and simpler, please consider forking this repo and submitting a Pull Request, or simply opening an issue.
908
909Suggestions and bug reports are also welcome.
910
911To work on the Bottleneck code, simply clone the repo, make your changes to the files located in `src/` only, then run `./scripts/build.sh && npm test` to ensure that everything is set up correctly.
912
913To speed up compilation time during development, run `./scripts/build.sh dev` instead. Make sure to build and test without `dev` before submitting a PR.
914
915The tests must also pass in Clustering mode and using the ES5 bundle. You'll need a Redis server running on `127.0.0.1:6379`, then run `./scripts/build.sh && npm run test-all`.
916
917All contributions are appreciated and will be considered.
918
919[license-url]: https://github.com/SGrondin/bottleneck/blob/master/LICENSE
920
921[npm-url]: https://www.npmjs.com/package/bottleneck
922[npm-license]: https://img.shields.io/npm/l/bottleneck.svg?style=flat
923[npm-version]: https://img.shields.io/npm/v/bottleneck.svg?style=flat
924[npm-downloads]: https://img.shields.io/npm/dm/bottleneck.svg?style=flat