1 | # lru-cache
|
2 |
|
3 | A cache object that deletes the least-recently-used items.
|
4 |
|
5 | Specify a max number of the most recently used items that you
|
6 | want to keep, and this cache will keep that many of the most
|
7 | recently accessed items.
|
8 |
|
9 | This is not primarily a TTL cache, and does not make strong TTL
|
10 | guarantees. There is no preemptive pruning of expired items by
|
11 | default, but you _may_ set a TTL on the cache or on a single
|
12 | `set`. If you do so, it will treat expired items as missing, and
|
13 | delete them when fetched. If you are more interested in TTL
|
14 | caching than LRU caching, check out
|
15 | [@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).
|
16 |
|
17 | As of version 7, this is one of the most performant LRU
|
18 | implementations available in JavaScript, and supports a wide
|
19 | diversity of use cases. However, note that using some of the
|
20 | features will necessarily impact performance, by causing the
|
21 | cache to have to do more work. See the "Performance" section
|
22 | below.
|
23 |
|
24 | ## Installation
|
25 |
|
26 | ```bash
|
27 | npm install lru-cache --save
|
28 | ```
|
29 |
|
30 | ## Usage
|
31 |
|
32 | ```js
|
33 | // hybrid module, either works
|
34 | import { LRUCache } from 'lru-cache'
|
35 | // or:
|
36 | const { LRUCache } = require('lru-cache')
|
37 | // or in minified form for web browsers:
|
38 | import { LRUCache } from 'http://unpkg.com/lru-cache@9/dist/mjs/index.min.mjs'
|
39 |
|
40 | // At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
|
41 | // unsafe unbounded storage.
|
42 | //
|
43 | // In most cases, it's best to specify a max for performance, so all
|
44 | // the required memory allocation is done up-front.
|
45 | //
|
46 | // All the other options are optional, see the sections below for
|
47 | // documentation on what each one does. Most of them can be
|
48 | // overridden for specific items in get()/set()
|
49 | const options = {
|
50 | max: 500,
|
51 |
|
52 | // for use with tracking overall storage size
|
53 | maxSize: 5000,
|
54 | sizeCalculation: (value, key) => {
|
55 | return 1
|
56 | },
|
57 |
|
58 | // for use when you need to clean up something when objects
|
59 | // are evicted from the cache
|
60 | dispose: (value, key) => {
|
61 | freeFromMemoryOrWhatever(value)
|
62 | },
|
63 |
|
64 | // how long to live in ms
|
65 | ttl: 1000 * 60 * 5,
|
66 |
|
67 | // return stale items before removing from cache?
|
68 | allowStale: false,
|
69 |
|
70 | updateAgeOnGet: false,
|
71 | updateAgeOnHas: false,
|
72 |
|
73 | // async method to use for cache.fetch(), for
|
74 | // stale-while-revalidate type of behavior
|
75 | fetchMethod: async (
|
76 | key,
|
77 | staleValue,
|
78 | { options, signal, context }
|
79 | ) => {},
|
80 | }
|
81 |
|
82 | const cache = new LRUCache(options)
|
83 |
|
84 | cache.set('key', 'value')
|
85 | cache.get('key') // "value"
|
86 |
|
87 | // non-string keys ARE fully supported
|
88 | // but note that it must be THE SAME object, not
|
89 | // just a JSON-equivalent object.
|
90 | var someObject = { a: 1 }
|
91 | cache.set(someObject, 'a value')
|
92 | // Object keys are not toString()-ed
|
93 | cache.set('[object Object]', 'a different value')
|
94 | assert.equal(cache.get(someObject), 'a value')
|
95 | // A similar object with same keys/values won't work,
|
96 | // because it's a different object identity
|
97 | assert.equal(cache.get({ a: 1 }), undefined)
|
98 |
|
99 | cache.clear() // empty the cache
|
100 | ```
|
101 |
|
102 | If you put more stuff in the cache, then less recently used items
|
103 | will fall out. That's what an LRU cache is.
|
104 |
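The eviction behavior described above can be sketched with a plain `Map`, which iterates in insertion order. This is only an illustrative sketch, not how lru-cache is actually implemented (it uses preallocated arrays for performance):

```js
// Minimal LRU sketch: Map iteration order doubles as recency order.
class TinyLRU {
  constructor (max) {
    this.max = max
    this.map = new Map()
  }
  get (key) {
    if (!this.map.has(key)) return undefined
    // re-insert to mark this entry as most recently used
    const value = this.map.get(key)
    this.map.delete(key)
    this.map.set(key, value)
    return value
  }
  set (key, value) {
    if (this.map.has(key)) this.map.delete(key)
    else if (this.map.size >= this.max) {
      // evict the least recently used entry (first in iteration order)
      this.map.delete(this.map.keys().next().value)
    }
    this.map.set(key, value)
  }
}

const lru = new TinyLRU(2)
lru.set('a', 1)
lru.set('b', 2)
lru.get('a')    // touch 'a', so 'b' is now least recently used
lru.set('c', 3) // evicts 'b'
```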
|
105 | ## `class LRUCache<K, V, FC = unknown>(options)`
|
106 |
|
107 | Create a new `LRUCache` object.
|
108 |
|
109 | When using TypeScript, set the `K` and `V` types to the `key` and
|
110 | `value` types, respectively.
|
111 |
|
112 | The `FC` ("fetch context") generic type defaults to `unknown`.
|
113 | If set to a value other than `void` or `undefined`, then any
|
114 | calls to `cache.fetch()` _must_ provide a `context` option
|
115 | matching the `FC` type. If `FC` is set to `void` or `undefined`,
|
116 | then `cache.fetch()` _must not_ provide a `context` option. See
|
117 | the documentation on `async fetch()` below.
|
118 |
|
119 | ## Options
|
120 |
|
121 | All options are available on the LRUCache instance, making it
|
122 | safe to pass an LRUCache instance as the options argument to make
|
123 | another empty cache of the same type.
|
124 |
|
125 | Some options are marked read-only because changing them after
|
126 | instantiation is not safe. Changing any of the other options
|
127 | will of course only have an effect on subsequent method calls.
|
128 |
|
129 | ### `max` (read only)
|
130 |
|
131 | The maximum number of items that remain in the cache (assuming no
|
132 | TTL pruning or explicit deletions). Note that fewer items may be
|
133 | stored if size calculation is used, and `maxSize` is exceeded.
|
This must be a positive finite integer.

At least one of `max`, `maxSize`, or `ttl` is required.
|
138 |
|
139 | **It is strongly recommended to set a `max` to prevent unbounded
|
140 | growth of the cache.** See "Storage Bounds Safety" below.
|
141 |
|
142 | ### `maxSize` (read only)
|
143 |
|
144 | Set to a positive integer to track the sizes of items added to
|
145 | the cache, and automatically evict items in order to stay below
|
146 | this size. Note that this may result in fewer than `max` items
|
147 | being stored.
|
148 |
|
Attempting to add an item to the cache whose calculated size is
greater than this amount will be a no-op. The item will not be
cached, and no other items will be evicted.
|
152 |
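A rough sketch of this size accounting, with hypothetical helper names (the real cache integrates this with its LRU bookkeeping):

```js
// Sketch of maxSize accounting: evict least recently used entries until
// the new item fits; items larger than maxSize are rejected outright.
const maxSize = 10
const entries = new Map() // key -> { value, size }; insertion order = recency
let totalSize = 0

function setSized (key, value, size) {
  if (size > maxSize) return false // larger than maxSize: no-op, nothing evicted
  const old = entries.get(key)
  if (old) {
    totalSize -= old.size
    entries.delete(key)
  }
  while (totalSize + size > maxSize) {
    const oldestKey = entries.keys().next().value
    totalSize -= entries.get(oldestKey).size
    entries.delete(oldestKey)
  }
  entries.set(key, { value, size })
  totalSize += size
  return true
}

setSized('a', 'small', 6)
setSized('b', 'other', 6)   // evicts 'a' to stay under maxSize
setSized('big', 'huge', 11) // rejected: larger than maxSize
```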
|
153 | Optional, must be a positive integer if provided.
|
154 |
|
155 | Sets `maxEntrySize` to the same value, unless a different value
|
156 | is provided for `maxEntrySize`.
|
157 |
|
At least one of `max`, `maxSize`, or `ttl` is required. This
must be a positive integer if set.
|
160 |
|
161 | Even if size tracking is enabled, **it is strongly recommended to
|
162 | set a `max` to prevent unbounded growth of the cache.** See
|
163 | "Storage Bounds Safety" below.
|
164 |
|
165 | ### `maxEntrySize`
|
166 |
|
167 | Set to a positive integer to track the sizes of items added to
|
168 | the cache, and prevent caching any item over a given size.
|
169 | Attempting to add an item whose calculated size is greater than
|
170 | this amount will be a no-op. The item will not be cached, and no
|
171 | other items will be evicted.
|
172 |
|
173 | Optional, must be a positive integer if provided. Defaults to
|
174 | the value of `maxSize` if provided.
|
175 |
|
176 | ### `sizeCalculation`
|
177 |
|
178 | Function used to calculate the size of stored items. If you're
|
179 | storing strings or buffers, then you probably want to do
|
180 | something like `n => n.length`. The item is passed as the first
|
181 | argument, and the key is passed as the second argument.
|
182 |
|
183 | This may be overridden by passing an options object to
|
184 | `cache.set()`.
|
185 |
|
186 | Requires `maxSize` to be set.
|
187 |
|
188 | If the `size` (or return value of `sizeCalculation`) for a given
|
189 | entry is greater than `maxEntrySize`, then the item will not be
|
190 | added to the cache.
|
191 |
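For instance, a `sizeCalculation` for a cache holding strings or serializable objects might look like the following. How you measure size is entirely up to you; this is just one plausible choice:

```js
// Hypothetical sizeCalculation: character count for strings,
// rough JSON length for everything else.
const sizeCalculation = (value, key) =>
  typeof value === 'string' ? value.length : JSON.stringify(value).length
```

This function would be passed as the `sizeCalculation` option alongside `maxSize`.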
|
192 | ### `fetchMethod` (read only)
|
193 |
|
194 | Function that is used to make background asynchronous fetches.
|
195 | Called with `fetchMethod(key, staleValue, { signal, options,
|
196 | context })`. May return a Promise.
|
197 |
|
198 | If `fetchMethod` is not provided, then `cache.fetch(key)` is
|
199 | equivalent to `Promise.resolve(cache.get(key))`.
|
200 |
|
If at any time `signal.aborted` becomes `true`, or the signal
emits an `'abort'` event (which can be observed via
`signal.onabort` or `addEventListener`), the fetch should be
abandoned. The signal may be passed along to async functions
that are aware of AbortController/AbortSignal behavior.
|
207 |
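The ways of observing the abort described above can be sketched with a bare `AbortController` (a global in modern Node.js and browsers):

```js
// A fetchMethod can watch for cancelation in any of these ways.
const ac = new AbortController()
const { signal } = ac

let abandoned = false
signal.addEventListener('abort', () => { abandoned = true })
// or equivalently: signal.onabort = () => { abandoned = true }

ac.abort() // e.g. the cache entry was deleted mid-fetch

// a long-running fetchMethod would poll signal.aborted between steps:
const shouldContinue = !signal.aborted
```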
|
208 | The `fetchMethod` should **only** return `undefined` or a Promise
|
209 | resolving to `undefined` if the AbortController signaled an
|
210 | `abort` event. In all other cases, it should return or resolve
|
211 | to a value suitable for adding to the cache.
|
212 |
|
213 | The `options` object is a union of the options that may be
|
214 | provided to `set()` and `get()`. If they are modified, then that
|
215 | will result in modifying the settings to `cache.set()` when the
|
216 | value is resolved, and in the case of `noDeleteOnFetchRejection`
|
217 | and `allowStaleOnFetchRejection`, the handling of `fetchMethod`
|
218 | failures.
|
219 |
|
220 | For example, a DNS cache may update the TTL based on the value
|
221 | returned from a remote DNS server by changing `options.ttl` in
|
222 | the `fetchMethod`.
|
223 |
|
224 | ### `noDeleteOnFetchRejection`
|
225 |
|
226 | If a `fetchMethod` throws an error or returns a rejected promise,
|
227 | then by default, any existing stale value will be removed from
|
228 | the cache.
|
229 |
|
230 | If `noDeleteOnFetchRejection` is set to `true`, then this
|
231 | behavior is suppressed, and the stale value remains in the cache
|
232 | in the case of a rejected `fetchMethod`.
|
233 |
|
234 | This is important in cases where a `fetchMethod` is _only_ called
|
235 | as a background update while the stale value is returned, when
|
236 | `allowStale` is used.
|
237 |
|
238 | This is implicitly in effect when `allowStaleOnFetchRejection` is
|
239 | set.
|
240 |
|
241 | This may be set in calls to `fetch()`, or defaulted on the
|
242 | constructor, or overridden by modifying the options object in the
|
243 | `fetchMethod`.
|
244 |
|
245 | ### `allowStaleOnFetchRejection`
|
246 |
|
247 | Set to true to return a stale value from the cache when a
|
248 | `fetchMethod` throws an error or returns a rejected Promise.
|
249 |
|
250 | If a `fetchMethod` fails, and there is no stale value available,
|
251 | the `fetch()` will resolve to `undefined`. Ie, all `fetchMethod`
|
252 | errors are suppressed.
|
253 |
|
254 | Implies `noDeleteOnFetchRejection`.
|
255 |
|
256 | This may be set in calls to `fetch()`, or defaulted on the
|
257 | constructor, or overridden by modifying the options object in the
|
258 | `fetchMethod`.
|
259 |
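The fallback logic is roughly equivalent to this simplified synchronous sketch (the real `fetchMethod` is asynchronous, and the helper names here are hypothetical):

```js
// Sketch of allowStaleOnFetchRejection: on loader failure, fall back to
// whatever stale value is already cached, or undefined if there is none.
const cache = new Map()

function fetchWithStaleFallback (key, loader) {
  const stale = cache.get(key)
  try {
    const fresh = loader(key)
    cache.set(key, fresh)
    return fresh
  } catch (err) {
    return stale // error suppressed; the stale value remains in the cache
  }
}

cache.set('k', 'old-value')
const failing = () => { throw new Error('backend down') }
const result = fetchWithStaleFallback('k', failing)     // 'old-value'
const missing = fetchWithStaleFallback('nope', failing) // undefined
```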
|
260 | ### `allowStaleOnFetchAbort`
|
261 |
|
262 | Set to true to return a stale value from the cache when the
|
263 | `AbortSignal` passed to the `fetchMethod` dispatches an `'abort'`
|
264 | event, whether user-triggered, or due to internal cache behavior.
|
265 |
|
266 | Unless `ignoreFetchAbort` is also set, the underlying
|
267 | `fetchMethod` will still be considered canceled, and any value
|
268 | it returns will be ignored and not cached.
|
269 |
|
270 | Caveat: since fetches are aborted when a new value is explicitly
|
271 | set in the cache, this can lead to fetch returning a stale value,
|
272 | since that was the fallback value _at the moment the `fetch()` was
|
273 | initiated_, even though the new updated value is now present in
|
274 | the cache.
|
275 |
|
276 | For example:
|
277 |
|
278 | ```ts
|
279 | const cache = new LRUCache<string, any>({
|
280 | ttl: 100,
|
281 | fetchMethod: async (url, oldValue, { signal }) => {
|
282 | const res = await fetch(url, { signal })
|
283 | return await res.json()
|
284 | }
|
285 | })
|
286 | cache.set('https://example.com/', { some: 'data' })
|
287 | // 100ms go by...
|
288 | const result = cache.fetch('https://example.com/')
|
289 | cache.set('https://example.com/', { other: 'thing' })
|
290 | console.log(await result) // { some: 'data' }
|
291 | console.log(cache.get('https://example.com/')) // { other: 'thing' }
|
292 | ```
|
293 |
|
294 | ### `ignoreFetchAbort`
|
295 |
|
296 | Set to true to ignore the `abort` event emitted by the
|
297 | `AbortSignal` object passed to `fetchMethod`, and still cache the
|
298 | resulting resolution value, as long as it is not `undefined`.
|
299 |
|
300 | When used on its own, this means aborted `fetch()` calls are not
|
301 | immediately resolved or rejected when they are aborted, and
|
302 | instead take the full time to await.
|
303 |
|
304 | When used with `allowStaleOnFetchAbort`, aborted `fetch()` calls
|
305 | will resolve immediately to their stale cached value or
|
306 | `undefined`, and will continue to process and eventually update
|
307 | the cache when they resolve, as long as the resulting value is
|
308 | not `undefined`, thus supporting a "return stale on timeout while
|
309 | refreshing" mechanism by passing `AbortSignal.timeout(n)` as the
|
310 | signal.
|
311 |
|
312 | For example:
|
313 |
|
314 | ```js
|
315 | const c = new LRUCache({
|
316 | ttl: 100,
|
317 | ignoreFetchAbort: true,
|
318 | allowStaleOnFetchAbort: true,
|
319 | fetchMethod: async (key, oldValue, { signal }) => {
|
320 | // note: do NOT pass the signal to fetch()!
|
321 | // let's say this fetch can take a long time.
|
322 | const res = await fetch(`https://slow-backend-server/${key}`)
|
323 | return await res.json()
|
324 | },
|
325 | })
|
326 |
|
327 | // this will return the stale value after 100ms, while still
|
328 | // updating in the background for next time.
|
329 | const val = await c.fetch('key', { signal: AbortSignal.timeout(100) })
|
330 | ```
|
331 |
|
332 | **Note**: regardless of this setting, an `abort` event _is still
|
333 | emitted on the `AbortSignal` object_, so may result in invalid
|
334 | results when passed to other underlying APIs that use
|
335 | AbortSignals.
|
336 |
|
337 | This may be overridden on the `fetch()` call or in the
|
338 | `fetchMethod` itself.
|
339 |
|
340 | ### `dispose` (read only)
|
341 |
|
342 | Function that is called on items when they are dropped from the
|
343 | cache, as `this.dispose(value, key, reason)`.
|
344 |
|
345 | This can be handy if you want to close file descriptors or do
|
346 | other cleanup tasks when items are no longer stored in the cache.
|
347 |
|
348 | **NOTE**: It is called _before_ the item has been fully removed
|
349 | from the cache, so if you want to put it right back in, you need
|
350 | to wait until the next tick. If you try to add it back in during
|
351 | the `dispose()` function call, it will break things in subtle and
|
352 | weird ways.
|
353 |
|
354 | Unlike several other options, this may _not_ be overridden by
|
355 | passing an option to `set()`, for performance reasons.
|
356 |
|
357 | The `reason` will be one of the following strings, corresponding
|
358 | to the reason for the item's deletion:
|
359 |
|
360 | - `evict` Item was evicted to make space for a new addition
|
361 | - `set` Item was overwritten by a new value
|
362 | - `delete` Item was removed by explicit `cache.delete(key)` or by
|
363 | calling `cache.clear()`, which deletes everything.
|
364 |
|
365 | The `dispose()` method is _not_ called for canceled calls to
|
366 | `fetchMethod()`. If you wish to handle evictions, overwrites,
|
367 | and deletes of in-flight asynchronous fetches, you must use the
|
368 | `AbortSignal` provided.
|
369 |
|
370 | Optional, must be a function.
|
371 |
|
372 | ### `disposeAfter` (read only)
|
373 |
|
374 | The same as `dispose`, but called _after_ the entry is completely
|
375 | removed and the cache is once again in a clean state.
|
376 |
|
377 | It is safe to add an item right back into the cache at this
|
378 | point. However, note that it is _very_ easy to inadvertently
|
379 | create infinite recursion in this way.
|
380 |
|
381 | The `disposeAfter()` method is _not_ called for canceled calls to
|
382 | `fetchMethod()`. If you wish to handle evictions, overwrites,
|
383 | and deletes of in-flight asynchronous fetches, you must use the
|
384 | `AbortSignal` provided.
|
385 |
|
386 | ### `noDisposeOnSet`
|
387 |
|
388 | Set to `true` to suppress calling the `dispose()` function if the
|
389 | entry key is still accessible within the cache.
|
390 |
|
391 | This may be overridden by passing an options object to
|
392 | `cache.set()`.
|
393 |
|
394 | Boolean, default `false`. Only relevant if `dispose` or
|
395 | `disposeAfter` options are set.
|
396 |
|
397 | ### `ttl`
|
398 |
|
399 | Max time to live for items before they are considered stale.
|
400 | Note that stale items are NOT preemptively removed by default,
|
401 | and MAY live in the cache, contributing to its LRU max, long
|
402 | after they have expired.
|
403 |
|
404 | Also, as this cache is optimized for LRU/MRU operations, some of
|
405 | the staleness/TTL checks will reduce performance.
|
406 |
|
407 | This is not primarily a TTL cache, and does not make strong TTL
|
408 | guarantees. There is no pre-emptive pruning of expired items,
|
409 | but you _may_ set a TTL on the cache, and it will treat expired
|
410 | items as missing when they are fetched, and delete them.
|
411 |
|
412 | Optional, but must be a positive integer in ms if specified.
|
413 |
|
414 | This may be overridden by passing an options object to
|
415 | `cache.set()`.
|
416 |
|
At least one of `max`, `maxSize`, or `ttl` is required. This
must be a positive integer if set.
|
419 |
|
420 | Even if ttl tracking is enabled, **it is strongly recommended to
|
421 | set a `max` to prevent unbounded growth of the cache.** See
|
422 | "Storage Bounds Safety" below.
|
423 |
|
424 | If ttl tracking is enabled, and `max` and `maxSize` are not set,
|
425 | and `ttlAutopurge` is not set, then a warning will be emitted
|
426 | cautioning about the potential for unbounded memory consumption.
|
427 | (The TypeScript definitions will also discourage this.)
|
428 |
|
429 | ### `noUpdateTTL`
|
430 |
|
431 | Boolean flag to tell the cache to not update the TTL when setting
|
432 | a new value for an existing key (ie, when updating a value rather
|
433 | than inserting a new value). Note that the TTL value is _always_
|
434 | set (if provided) when adding a new entry into the cache.
|
435 |
|
436 | This may be passed as an option to `cache.set()`.
|
437 |
|
438 | Boolean, default false.
|
439 |
|
440 | ### `ttlResolution`
|
441 |
|
442 | Minimum amount of time in ms in which to check for staleness.
|
443 | Defaults to `1`, which means that the current time is checked at
|
444 | most once per millisecond.
|
445 |
|
446 | Set to `0` to check the current time every time staleness is
|
447 | tested.
|
448 |
|
449 | Note that setting this to a higher value _will_ improve
|
450 | performance somewhat while using ttl tracking, albeit at the
|
451 | expense of keeping stale items around a bit longer than intended.
|
452 |
|
453 | ### `ttlAutopurge`
|
454 |
|
455 | Preemptively remove stale items from the cache.
|
456 |
|
457 | Note that this may _significantly_ degrade performance,
|
458 | especially if the cache is storing a large number of items. It
|
459 | is almost always best to just leave the stale items in the cache,
|
460 | and let them fall out as new items are added.
|
461 |
|
462 | Note that this means that `allowStale` is a bit pointless, as
|
463 | stale items will be deleted almost as soon as they expire.
|
464 |
|
465 | Use with caution!
|
466 |
|
467 | Boolean, default `false`
|
468 |
|
469 | ### `allowStale`
|
470 |
|
471 | By default, if you set `ttl`, it'll only delete stale items from
|
472 | the cache when you `get(key)`. That is, it's not preemptively
|
473 | pruning items.
|
474 |
|
475 | If you set `allowStale:true`, it'll return the stale value as
|
476 | well as deleting it. If you don't set this, then it'll return
|
477 | `undefined` when you try to get a stale entry.
|
478 |
|
479 | Note that when a stale entry is fetched, _even if it is returned
|
480 | due to `allowStale` being set_, it is removed from the cache
|
481 | immediately. You can immediately put it back in the cache if you
|
482 | wish, thus resetting the TTL.
|
483 |
|
484 | This may be overridden by passing an options object to
|
485 | `cache.get()`. The `cache.has()` method will always return
|
486 | `false` for stale items.
|
487 |
|
488 | Boolean, default false, only relevant if `ttl` is set.
|
489 |
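A sketch of this get-time expiry handling, deliberately taking `now` as a parameter so the behavior is deterministic (the real cache reads the clock itself):

```js
// Sketch of TTL handling on get(): expired entries are deleted when
// accessed; allowStale controls whether the dying value is returned.
const store = new Map() // key -> { value, expiresAt }

const set = (key, value, ttl, now = Date.now()) =>
  store.set(key, { value, expiresAt: now + ttl })

function get (key, { allowStale = false, now = Date.now() } = {}) {
  const entry = store.get(key)
  if (!entry) return undefined
  if (now > entry.expiresAt) {
    store.delete(key) // stale: removed even when the value is returned
    return allowStale ? entry.value : undefined
  }
  return entry.value
}

set('k', 'v', 100, 0)
const staleHit = get('k', { allowStale: true, now: 200 }) // 'v', and deleted
const gone = get('k', { now: 200 })                       // undefined
```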
|
490 | ### `noDeleteOnStaleGet`
|
491 |
|
492 | When using time-expiring entries with `ttl`, by default stale
|
493 | items will be removed from the cache when the key is accessed
|
494 | with `cache.get()`.
|
495 |
|
496 | Setting `noDeleteOnStaleGet` to `true` will cause stale items to
|
497 | remain in the cache, until they are explicitly deleted with
|
498 | `cache.delete(key)`, or retrieved with `noDeleteOnStaleGet` set
|
499 | to `false`.
|
500 |
|
501 | This may be overridden by passing an options object to
|
502 | `cache.get()`.
|
503 |
|
504 | Boolean, default false, only relevant if `ttl` is set.
|
505 |
|
506 | ### `updateAgeOnGet`
|
507 |
|
508 | When using time-expiring entries with `ttl`, setting this to
|
509 | `true` will make each item's age reset to 0 whenever it is
|
510 | retrieved from cache with `get()`, causing it to not expire. (It
|
511 | can still fall out of cache based on recency of use, of course.)
|
512 |
|
513 | This may be overridden by passing an options object to
|
514 | `cache.get()`.
|
515 |
|
516 | Boolean, default false, only relevant if `ttl` is set.
|
517 |
|
518 | ### `updateAgeOnHas`
|
519 |
|
520 | When using time-expiring entries with `ttl`, setting this to
|
521 | `true` will make each item's age reset to 0 whenever its presence
|
522 | in the cache is checked with `has()`, causing it to not expire.
|
523 | (It can still fall out of cache based on recency of use, of
|
524 | course.)
|
525 |
|
526 | This may be overridden by passing an options object to
|
527 | `cache.has()`.
|
528 |
|
529 | Boolean, default false, only relevant if `ttl` is set.
|
530 |
|
531 | ## API
|
532 |
|
533 | ### `new LRUCache<K, V, FC = unknown>(options)`
|
534 |
|
535 | Create a new LRUCache. All options are documented above, and are
|
536 | on the cache as public members.
|
537 |
|
538 | The `K` and `V` types define the key and value types,
|
539 | respectively. The optional `FC` type defines the type of the
|
540 | `context` object passed to `cache.fetch()`.
|
541 |
|
542 | Keys and values **must not** be `null` or `undefined`.
|
543 |
|
### `cache.max`, `cache.maxSize`, `cache.allowStale`, `cache.noDisposeOnSet`, `cache.sizeCalculation`, `cache.dispose`, `cache.ttl`, `cache.updateAgeOnGet`, `cache.updateAgeOnHas`
|
549 |
|
550 | All option names are exposed as public members on the cache
|
551 | object.
|
552 |
|
553 | These are intended for read access only. Changing them during
|
554 | program operation can cause undefined behavior.
|
555 |
|
556 | ### `cache.size`
|
557 |
|
558 | The total number of items held in the cache at the current
|
559 | moment.
|
560 |
|
561 | ### `cache.calculatedSize`
|
562 |
|
563 | The total size of items in cache when using size tracking.
|
564 |
|
565 | ### `set(key, value, [{ size, sizeCalculation, ttl, noDisposeOnSet, start, status }])`
|
566 |
|
567 | Add a value to the cache.
|
568 |
|
569 | Optional options object may contain `ttl` and `sizeCalculation`
|
570 | as described above, which default to the settings on the cache
|
571 | object.
|
572 |
|
573 | If `start` is provided, then that will set the effective start
|
574 | time for the TTL calculation. Note that this must be a previous
|
575 | value of `performance.now()` if supported, or a previous value of
|
576 | `Date.now()` if not.
|
577 |
|
578 | Options object may also include `size`, which will prevent
|
579 | calling the `sizeCalculation` function and just use the specified
|
580 | number if it is a positive integer, and `noDisposeOnSet` which
|
581 | will prevent calling a `dispose` function in the case of
|
582 | overwrites.
|
583 |
|
584 | If the `size` (or return value of `sizeCalculation`) for a given
|
585 | entry is greater than `maxEntrySize`, then the item will not be
|
586 | added to the cache.
|
587 |
|
588 | Will update the recency of the entry.
|
589 |
|
590 | Returns the cache object.
|
591 |
|
592 | For the usage of the `status` option, see **Status Tracking**
|
593 | below.
|
594 |
|
595 | If the value is `undefined`, then this is an alias for
|
596 | `cache.delete(key)`. `undefined` is never stored in the cache.
|
597 | See **Storing Undefined Values** below.
|
598 |
|
599 | ### `get(key, { updateAgeOnGet, allowStale, status } = {}) => value`
|
600 |
|
601 | Return a value from the cache.
|
602 |
|
603 | Will update the recency of the cache entry found.
|
604 |
|
605 | If the key is not found, `get()` will return `undefined`.
|
606 |
|
607 | For the usage of the `status` option, see **Status Tracking**
|
608 | below.
|
609 |
|
610 | ### `async fetch(key, options = {}) => Promise`
|
611 |
|
612 | The following options are supported:
|
613 |
|
614 | - `updateAgeOnGet`
|
615 | - `allowStale`
|
616 | - `size`
|
617 | - `sizeCalculation`
|
618 | - `ttl`
|
619 | - `noDisposeOnSet`
|
620 | - `forceRefresh`
|
621 | - `status` - See **Status Tracking** below.
|
622 | - `signal` - AbortSignal can be used to cancel the `fetch()`.
|
623 | Note that the `signal` option provided to the `fetchMethod` is
|
624 | a different object, because it must also respond to internal
|
625 | cache state changes, but aborting this signal will abort the
|
626 | one passed to `fetchMethod` as well.
|
627 | - `context` - sets the `context` option passed to the underlying
|
628 | `fetchMethod`.
|
629 |
|
630 | If the value is in the cache and not stale, then the returned
|
631 | Promise resolves to the value.
|
632 |
|
633 | If not in the cache, or beyond its TTL staleness, then
|
634 | `fetchMethod(key, staleValue, { options, signal, context })` is
|
635 | called, and the value returned will be added to the cache once
|
636 | resolved.
|
637 |
|
638 | If called with `allowStale`, and an asynchronous fetch is
|
639 | currently in progress to reload a stale value, then the former
|
640 | stale value will be returned.
|
641 |
|
642 | If called with `forceRefresh`, then the cached item will be
|
643 | re-fetched, even if it is not stale. However, if `allowStale` is
|
644 | set, then the old value will still be returned. This is useful
|
645 | in cases where you want to force a reload of a cached value. If
|
646 | a background fetch is already in progress, then `forceRefresh`
|
647 | has no effect.
|
648 |
|
649 | Multiple fetches for the same `key` will only call `fetchMethod`
|
650 | a single time, and all will be resolved when the value is
|
651 | resolved, even if different options are used.
|
652 |
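The deduplication can be sketched with a map of in-flight promises. Note this is a simplification: the sketch hands back the literal same promise object, whereas lru-cache returns distinct promises that resolve together:

```js
// Sketch of inflight deduplication: concurrent fetches for a key share
// one loader call.
const inflight = new Map()
let loaderCalls = 0

function dedupedFetch (key, loader) {
  if (inflight.has(key)) return inflight.get(key)
  // the Promise executor runs synchronously, so loader is invoked here
  const p = new Promise(resolve => resolve(loader(key)))
    .finally(() => inflight.delete(key))
  inflight.set(key, p)
  return p
}

const loader = key => { loaderCalls++; return `value:${key}` }
const p1 = dedupedFetch('k', loader)
const p2 = dedupedFetch('k', loader) // joins the in-flight fetch
```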
|
653 | If `fetchMethod` is not specified, then this is effectively an
|
654 | alias for `Promise.resolve(cache.get(key))`.
|
655 |
|
656 | When the fetch method resolves to a value, if the fetch has not
|
657 | been aborted due to deletion, eviction, or being overwritten,
|
658 | then it is added to the cache using the options provided.
|
659 |
|
660 | If the key is evicted or deleted before the `fetchMethod`
|
661 | resolves, then the AbortSignal passed to the `fetchMethod` will
|
662 | receive an `abort` event, and the promise returned by `fetch()`
|
663 | will reject with the reason for the abort.
|
664 |
|
665 | If a `signal` is passed to the `fetch()` call, then aborting the
|
666 | signal will abort the fetch and cause the `fetch()` promise to
|
667 | reject with the reason provided.
|
668 |
|
669 | #### Setting `context`
|
670 |
|
671 | If an `FC` type is set to a type other than `unknown`, `void`, or
|
672 | `undefined` in the LRUCache constructor, then all
|
673 | calls to `cache.fetch()` _must_ provide a `context` option. If
|
674 | set to `undefined` or `void`, then calls to fetch _must not_
|
675 | provide a `context` option.
|
676 |
|
677 | The `context` param allows you to provide arbitrary data that
|
678 | might be relevant in the course of fetching the data. It is only
|
679 | relevant for the course of a single `fetch()` operation, and
|
680 | discarded afterwards.
|
681 |
|
682 | #### Note: `fetch()` calls are inflight-unique
|
683 |
|
684 | If you call `fetch()` multiple times with the same key value,
|
685 | then every call after the first will resolve on the same
|
686 | promise<sup>1</sup>,
|
_even if they have different settings that would otherwise change
the behavior of the fetch_, such as `noDeleteOnFetchRejection`
or `ignoreFetchAbort`.
|
690 |
|
691 | In most cases, this is not a problem (in fact, only fetching
|
692 | something once is what you probably want, if you're caching in
|
693 | the first place). If you are changing the fetch() options
|
694 | dramatically between runs, there's a good chance that you might
|
695 | be trying to fit divergent semantics into a single object, and
|
696 | would be better off with multiple cache instances.
|
697 |
|
698 | **1**: Ie, they're not the "same Promise", but they resolve at
|
699 | the same time, because they're both waiting on the same
|
700 | underlying fetchMethod response.
|
701 |
|
702 | ### `peek(key, { allowStale } = {}) => value`
|
703 |
|
704 | Like `get()` but doesn't update recency or delete stale items.
|
705 |
|
706 | Returns `undefined` if the item is stale, unless `allowStale` is
|
707 | set either on the cache or in the options object.
|
708 |
|
709 | ### `has(key, { updateAgeOnHas, status } = {}) => Boolean`
|
710 |
|
711 | Check if a key is in the cache, without updating the recency of
|
712 | use. Age is updated if `updateAgeOnHas` is set to `true` in
|
713 | either the options or the constructor.
|
714 |
|
715 | Will return `false` if the item is stale, even though it is
|
716 | technically in the cache. The difference can be determined (if
|
717 | it matters) by using a `status` argument, and inspecting the
|
718 | `has` field.
|
719 |
|
720 | For the usage of the `status` option, see **Status Tracking**
|
721 | below.
|
722 |
|
723 | ### `delete(key)`
|
724 |
|
725 | Deletes a key out of the cache.
|
726 |
|
727 | Returns `true` if the key was deleted, `false` otherwise.
|
728 |
|
729 | ### `clear()`
|
730 |
|
731 | Clear the cache entirely, throwing away all values.
|
732 |
|
733 | ### `keys()`
|
734 |
|
735 | Return a generator yielding the keys in the cache, in order from
|
736 | most recently used to least recently used.
|
737 |
|
738 | ### `rkeys()`
|
739 |
|
740 | Return a generator yielding the keys in the cache, in order from
|
741 | least recently used to most recently used.
|
742 |
|
743 | ### `values()`
|
744 |
|
745 | Return a generator yielding the values in the cache, in order
|
746 | from most recently used to least recently used.
|
747 |
|
748 | ### `rvalues()`
|
749 |
|
750 | Return a generator yielding the values in the cache, in order
|
751 | from least recently used to most recently used.
|
752 |
|
753 | ### `entries()`
|
754 |
|
755 | Return a generator yielding `[key, value]` pairs, in order from
|
756 | most recently used to least recently used.
|
757 |
|
758 | ### `rentries()`
|
759 |
|
760 | Return a generator yielding `[key, value]` pairs, in order from
|
761 | least recently used to most recently used.
|
762 |
|
763 | ### `find(fn, [getOptions])`
|
764 |
|
765 | Find a value for which the supplied `fn` method returns a truthy
|
766 | value, similar to `Array.find()`.
|
767 |
|
768 | `fn` is called as `fn(value, key, cache)`.
|
769 |
|
770 | The optional `getOptions` are applied to the resulting `get()` of
|
771 | the item found.
|
772 |
|
773 | ### `dump()`
|
774 |
|
Return an array of `[key, entry]` objects which can be passed to
`cache.load()`.
|
777 |
|
778 | The `start` fields are calculated relative to a portable
|
779 | `Date.now()` timestamp, even if `performance.now()` is available.
|
780 |
|
781 | Stale entries are always included in the `dump`, even if
|
782 | `allowStale` is false.
|
783 |
|
784 | Note: this returns an actual array, not a generator, so it can be
|
785 | more easily passed around.
|
786 |
|
787 | ### `load(entries)`
|
788 |
|
789 | Reset the cache and load in the items in `entries` in the order
|
790 | listed. Note that the shape of the resulting cache may be
|
791 | different if the same options are not used in both caches.
|
792 |
|
793 | The `start` fields are assumed to be calculated relative to a
|
794 | portable `Date.now()` timestamp, even if `performance.now()` is
|
795 | available.
|
796 |
|
### `purgeStale()`

Delete any stale entries. Returns `true` if anything was
removed, `false` otherwise.

### `getRemainingTTL(key)`

Return the number of ms left in the item's TTL. If the item is
not in the cache, returns `0`. Returns `Infinity` if the item is
in the cache without a defined TTL.

### `forEach(fn, [thisp])`

Call the `fn` function with each set of `fn(value, key, cache)`
in the LRU cache, from most recently to least recently used.

Does not affect recency of use.

If `thisp` is provided, the function will be called in the
`this`-context of the provided object.

### `rforEach(fn, [thisp])`

Same as `cache.forEach(fn, thisp)`, but in order from least
recently used to most recently used.

### `pop()`

Evict the least recently used item, returning its value.

Returns `undefined` if the cache is empty.

## Status Tracking

Occasionally, it may be useful to track the internal behavior of
the cache, particularly for logging, debugging, or for behavior
within the `fetchMethod`. To do this, you can pass a `status`
object to the `get()`, `set()`, `has()`, and `fetch()` methods.

The `status` option should be a plain JavaScript object.

The following fields will be set appropriately:

```ts
interface Status<V> {
  /**
   * The status of a set() operation.
   *
   * - add: the item was not found in the cache, and was added
   * - update: the item was in the cache, with the same value provided
   * - replace: the item was in the cache, and replaced
   * - miss: the item was not added to the cache for some reason
   */
  set?: 'add' | 'update' | 'replace' | 'miss'

  /**
   * The ttl stored for the item, or undefined if ttls are not used.
   */
  ttl?: LRUMilliseconds

  /**
   * The start time for the item, or undefined if ttls are not used.
   */
  start?: LRUMilliseconds

  /**
   * The timestamp used for TTL calculation
   */
  now?: LRUMilliseconds

  /**
   * The remaining ttl for the item, or undefined if ttls are not used.
   */
  remainingTTL?: LRUMilliseconds

  /**
   * The calculated size for the item, if sizes are used.
   */
  size?: LRUSize

  /**
   * A flag indicating that the item was not stored, due to exceeding the
   * {@link maxEntrySize}
   */
  maxEntrySizeExceeded?: true

  /**
   * The old value, specified in the case of `set:'update'` or
   * `set:'replace'`
   */
  oldValue?: V

  /**
   * The results of a {@link has} operation
   *
   * - hit: the item was found in the cache
   * - stale: the item was found in the cache, but is stale
   * - miss: the item was not found in the cache
   */
  has?: 'hit' | 'stale' | 'miss'

  /**
   * The status of a {@link fetch} operation.
   * Note that this can change as the underlying fetch() moves through
   * various states.
   *
   * - inflight: there is another fetch() for this key which is in process
   * - get: there is no fetchMethod, so {@link get} was called.
   * - miss: the item is not in cache, and will be fetched.
   * - hit: the item is in the cache, and was resolved immediately.
   * - stale: the item is in the cache, but stale.
   * - refresh: the item is in the cache, and not stale, but
   *   {@link forceRefresh} was specified.
   */
  fetch?: 'get' | 'inflight' | 'miss' | 'hit' | 'stale' | 'refresh'

  /**
   * The {@link fetchMethod} was called
   */
  fetchDispatched?: true

  /**
   * The results of the fetchMethod promise were stored in the cache
   */
  fetchUpdated?: true

  /**
   * The reason for a fetch() rejection. Either the error raised by the
   * {@link fetchMethod}, or the reason for an AbortSignal.
   */
  fetchError?: Error

  /**
   * The fetch received an abort signal
   */
  fetchAborted?: true

  /**
   * The abort signal received was ignored, and the fetch was allowed to
   * continue.
   */
  fetchAbortIgnored?: true

  /**
   * The fetchMethod promise resolved successfully
   */
  fetchResolved?: true

  /**
   * The fetchMethod promise was rejected
   */
  fetchRejected?: true

  /**
   * The status of a {@link get} operation.
   *
   * - fetching: The item is currently being fetched. If a previous value
   *   is present and allowed, that will be returned.
   * - stale: The item is in the cache, and is stale.
   * - hit: the item is in the cache
   * - miss: the item is not in the cache
   */
  get?: 'stale' | 'hit' | 'miss'

  /**
   * A fetch or get operation returned a stale value.
   */
  returnedStale?: true
}
```

## Storage Bounds Safety

This implementation aims to be as flexible as possible, within
the limits of safe memory consumption and optimal performance.

At initial object creation, storage is allocated for `max` items.
If `max` is set to zero, then some performance is lost, and the
item count is unbounded. Either `maxSize` or `ttl` _must_ be set
if `max` is not specified.

If `maxSize` is set, then this creates a safe limit on the
maximum storage consumed, but without the performance benefits of
pre-allocation. When `maxSize` is set, every item _must_ provide
a size, either via the `sizeCalculation` method provided to the
constructor, or via a `size` or `sizeCalculation` option provided
to `cache.set()`. The size of every item _must_ be a positive
integer.

If neither `max` nor `maxSize` are set, then `ttl` tracking must
be enabled. Note that, even when tracking item `ttl`, items are
_not_ preemptively deleted when they become stale, unless
`ttlAutopurge` is enabled. Instead, they are only purged the
next time the key is requested. Thus, if `ttlAutopurge`, `max`,
and `maxSize` are all not set, then the cache will potentially
grow unbounded.

In this case, a warning is printed to standard error. Future
versions may require the use of `ttlAutopurge` if `max` and
`maxSize` are not specified.

If you truly wish to use a cache that is bound _only_ by TTL
expiration, consider using a `Map` object, and calling
`setTimeout` to delete entries when they expire. It will perform
much better than an LRU cache.

Here is an implementation you may use, under the same
[license](./LICENSE) as this package:

```js
// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
  data: new Map(),
  timers: new Map(),
  set: (k, v, ttl) => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.set(
      k,
      setTimeout(() => cache.delete(k), ttl)
    )
    cache.data.set(k, v)
  },
  get: k => cache.data.get(k),
  has: k => cache.data.has(k),
  delete: k => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.delete(k)
    return cache.data.delete(k)
  },
  clear: () => {
    cache.data.clear()
    for (const v of cache.timers.values()) {
      clearTimeout(v)
    }
    cache.timers.clear()
  },
}
```

If that isn't to your liking, check out
[@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).

## Storing Undefined Values

This cache never stores undefined values, as `undefined` is used
internally in a few places to indicate that a key is not in the
cache.

You may call `cache.set(key, undefined)`, but this is just an
alias for `cache.delete(key)`. Note that this has the effect
that `cache.has(key)` will return _false_ after setting it to
undefined.

```js
cache.set(myKey, undefined)
cache.has(myKey) // false!
```

If you need to track `undefined` values, and still note that the
key is in the cache, an easy workaround is to use a sigil object
of your own.

```js
import { LRUCache } from 'lru-cache'
const undefinedValue = Symbol('undefined')
const cache = new LRUCache(...)
const mySet = (key, value) =>
  cache.set(key, value === undefined ? undefinedValue : value)
const myGet = key => {
  const v = cache.get(key)
  return v === undefinedValue ? undefined : v
}
```

## Performance

As of January 2022, version 7 of this library is one of the most
performant LRU cache implementations in JavaScript.

Benchmarks can be extremely difficult to get right. In
particular, the performance of set/get/delete operations on
objects will vary _wildly_ depending on the type of key used. V8
is highly optimized for objects with keys that are short strings,
especially integer numeric strings. Thus any benchmark which
tests _solely_ using numbers as keys will tend to find that an
object-based approach performs the best.

Note that coercing _anything_ to strings to use as object keys is
unsafe, unless you can be 100% certain that no other type of
value will be used. For example:

```js
const myCache = {}
const set = (k, v) => (myCache[k] = v)
const get = k => myCache[k]

set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
```

Also beware of "Just So" stories regarding performance. Garbage
collection of large (especially: deep) object graphs can be
incredibly costly, with several "tipping points" where it
increases exponentially. As a result, putting that off until
later can make it much worse, and less predictable. If a library
performs well, but only in a scenario where the object graph is
kept shallow, then that won't help you if you are using large
objects as keys.

In general, when attempting to use a library to improve
performance (such as a cache like this one), it's best to choose
an option that will perform well in the sorts of scenarios where
you'll actually use it.

This library is optimized for repeated gets and minimizing
eviction time, since that is the expected need of an LRU. Set
operations are somewhat slower on average than a few other
options, in part because of that optimization. It is assumed
that you'll be caching some costly operation, ideally as rarely
as possible, so optimizing set over get would be unwise.

If performance matters to you:

1. If it's at all possible to use small integer values as keys,
   and you can guarantee that no other types of values will be
   used as keys, then do that, and use a cache such as
   [lru-fast](https://npmjs.com/package/lru-fast), or
   [mnemonist's
   LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache),
   which uses an Object as its data store.

2. Failing that, if at all possible, use short non-numeric
   strings (ie, less than 256 characters) as your keys, and use
   [mnemonist's
   LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache).

3. If the types of your keys will be anything else, especially
   long strings, strings that look like floats, objects, or some
   mix of types, or if you aren't sure, then this library will
   work well for you.

   If you do not need the features that this library provides
   (like asynchronous fetching, a variety of TTL staleness
   options, and so on), then [mnemonist's
   LRUMap](https://yomguithereal.github.io/mnemonist/lru-map) is
   a very good option, and just slightly faster than this module
   (since it does considerably less).

4. Do not use a `dispose` function, size tracking, or especially
   ttl behavior, unless absolutely needed. These features are
   convenient, and necessary in some use cases, and every attempt
   has been made to make the performance impact minimal, but it
   isn't nothing.

## Breaking Changes in Version 7

This library changed to a different algorithm and internal data
structure in version 7, yielding significantly better
performance, albeit with some subtle changes as a result.

If you were relying on the internals of LRUCache in version 6 or
before, it probably will not work in version 7 and above.

## Breaking Changes in Version 8

- The `fetchContext` option was renamed to `context`, and may no
  longer be set on the cache instance itself.
- Rewritten in TypeScript, so pretty much all the types moved
  around a lot.
- The AbortController/AbortSignal polyfill was removed. For this
  reason, **Node version 16.14.0 or higher is now required**.
- Internal properties were moved to actual private class
  properties.
- Keys and values must not be `null` or `undefined`.
- Minified export available at `'lru-cache/min'`, for both CJS
  and MJS builds.

## Changes in Version 9

- Named export only, no default export.
- AbortController polyfill returned, albeit with a warning when
  used.

For more info, see the [change log](CHANGELOG.md).