![Continuous Integration](https://github.com/mike-marcacci/fs-capacitor/workflows/Continuous%20Integration/badge.svg) [![Current Version](https://badgen.net/npm/v/fs-capacitor)](https://npm.im/fs-capacitor) ![Supported Node.js Versions](https://badgen.net/npm/node/fs-capacitor)

# FS Capacitor

FS Capacitor is a filesystem buffer for finite node streams. It supports simultaneous read/write, and can be used to create multiple independent readable streams, each starting at the beginning of the buffer.

This is useful for file uploads and other situations where you want to avoid delays to the source stream, but have slow downstream transformations to apply:
```js
import fs from "fs";
import http from "http";
import { WriteStream } from "fs-capacitor";

http.createServer((req, res) => {
  const capacitor = new WriteStream();
  const destination = fs.createWriteStream("destination.txt");

  // pipe data to the capacitor
  req.pipe(capacitor);

  // read data from the capacitor
  capacitor
    .createReadStream()
    .pipe(/* some slow Transform streams here */)
    .pipe(destination);

  // read data from the very beginning
  setTimeout(() => {
    capacitor.createReadStream().pipe(/* elsewhere */);

    // you can destroy a capacitor as soon as no more read streams are needed
    // without worrying if existing streams are fully consumed
    capacitor.destroy();
  }, 100);
});
```

It is especially useful for use cases like [`graphql-upload`](https://github.com/jaydenseric/graphql-upload) where server code may need to stash earlier parts of a stream until later parts have been processed, and needs to attach multiple consumers at different times.

FS Capacitor creates its temporary files in the directory identified by `os.tmpdir()` and attempts to remove them:

- after `writeStream.destroy()` has been called and all read streams are fully consumed or destroyed
- before the process exits

Please note that FS Capacitor does NOT release disk space _as data is consumed_, and therefore is not suitable for use with infinite streams or those larger than the filesystem.

### Ensuring cleanup on termination by process signal

FS Capacitor cleans up all of its temporary files before the process exits, by listening to the [node process's `exit` event](https://nodejs.org/api/process.html#process_event_exit). This event, however, is only emitted when the process is about to exit as a result of either:

- The `process.exit()` method being called explicitly;
- The Node.js event loop no longer having any additional work to perform.

When the node process receives a `SIGINT`, `SIGTERM`, or `SIGHUP` signal and there is no handler, it will exit without emitting the `exit` event.

Beginning in version 3, fs-capacitor will NOT listen for these signals. Instead, the application should handle these signals according to its own logic and call `process.exit()` when it is ready to exit. This allows the application to implement its own graceful shutdown procedures, such as waiting for a stream to finish.

The following can be added to the application to ensure resources are cleaned up before a signal-induced exit:

```js
function shutdown() {
  // Any sync or async graceful shutdown procedures can be run before exiting…
  process.exit(0);
}

process.on("SIGINT", shutdown);
process.on("SIGTERM", shutdown);
process.on("SIGHUP", shutdown);
```

## API

### WriteStream

`WriteStream` extends [`stream.Writable`](https://nodejs.org/api/stream.html#stream_implementing_a_writable_stream).

#### `new WriteStream(options: WriteStreamOptions)`

Create a new `WriteStream` instance.

#### `.createReadStream(options?: ReadStreamOptions): ReadStream`

Create a new `ReadStream` instance attached to the `WriteStream` instance.

Calling `.createReadStream()` on a released `WriteStream` will throw a `ReadAfterReleasedError` error.

Calling `.createReadStream()` on a destroyed `WriteStream` will throw a `ReadAfterDestroyedError` error.

As soon as a `ReadStream` ends or is closed (such as by calling `readStream.destroy()`), it is detached from its `WriteStream`.
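
A minimal sketch of guarding against these errors, assuming the error classes are exported by the package alongside `WriteStream`:

```js
import {
  WriteStream,
  ReadAfterDestroyedError,
  ReadAfterReleasedError,
} from "fs-capacitor";

const capacitor = new WriteStream();
capacitor.destroy();

try {
  capacitor.createReadStream();
} catch (error) {
  if (error instanceof ReadAfterDestroyedError) {
    // the capacitor was destroyed; a new WriteStream is needed
  } else if (error instanceof ReadAfterReleasedError) {
    // the capacitor was released and can no longer create read streams
  } else {
    throw error;
  }
}
```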

#### `.release(): void`

Release the `WriteStream`'s claim on the underlying resources. Once called, destruction of underlying resources is performed as soon as all attached `ReadStream`s are removed.
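
For example, a sketch of the common pattern of attaching every consumer first and then releasing, so the temporary file is removed once the last read stream finishes (the file names here are illustrative):

```js
import fs from "fs";
import { WriteStream } from "fs-capacitor";

const capacitor = new WriteStream();
fs.createReadStream("source.txt").pipe(capacitor);

// attach all consumers first…
const readStream = capacitor.createReadStream();
readStream.pipe(fs.createWriteStream("copy.txt"));

// …then release: the underlying temporary file is cleaned up once
// `readStream` (and any other attached read stream) ends or is destroyed
capacitor.release();
```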

#### `.destroy(error?: ?Error): void`

Destroy the `WriteStream` and all attached `ReadStream`s. If `error` is present, attached `ReadStream`s are destroyed with the same error.
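
A sketch of propagating a source failure to every attached consumer, following the HTTP example at the top of this document:

```js
import http from "http";
import { WriteStream } from "fs-capacitor";

http.createServer((req, res) => {
  const capacitor = new WriteStream();
  req.pipe(capacitor);

  req.on("error", (error) => {
    // destroy the capacitor and every attached ReadStream with the same error
    capacitor.destroy(error);
  });
});
```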

### WriteStreamOptions

#### `.highWaterMark?: number`

Uses node's default of `16384` (16 KiB). Optional buffer size at which the writable stream will begin returning `false`. See [node's docs for `stream.Writable`](https://nodejs.org/api/stream.html#stream_constructor_new_stream_writable_options). For the curious, node has [a guide on backpressure in streams](https://nodejs.org/es/docs/guides/backpressuring-in-streams/).

#### `.defaultEncoding`

Uses node's default of `utf8`. Optional default encoding to use when no encoding is specified as an argument to `stream.write()`. See [node's docs for `stream.Writable`](https://nodejs.org/api/stream.html#stream_constructor_new_stream_writable_options). Possible values depend on the version of node, and are [defined in node's buffer implementation](https://github.com/nodejs/node/blob/master/lib/buffer.js).

#### `.tmpdir`

Uses node's [`os.tmpdir`](https://nodejs.org/api/os.html#os_os_tmpdir) by default. This option is a function that returns the directory used by fs-capacitor to store file buffers, and is intended primarily for testing and debugging.
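
Putting the three options together, a sketch of constructing a `WriteStream` with explicit (purely illustrative) values:

```js
import os from "os";
import { WriteStream } from "fs-capacitor";

const capacitor = new WriteStream({
  highWaterMark: 64 * 1024, // allow 64 KiB of pending writes before applying backpressure
  defaultEncoding: "utf8", // encoding used when stream.write() is called without one
  tmpdir: () => os.tmpdir(), // function returning the directory for the temporary buffer file
});
```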

### ReadStream

`ReadStream` extends [`stream.Readable`](https://nodejs.org/api/stream.html#stream_new_stream_readable_options).
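
Because it is a standard `stream.Readable`, a `ReadStream` can be piped or, for example, consumed with async iteration. A sketch, using top-level `await` in an ES module:

```js
import { WriteStream } from "fs-capacitor";

const capacitor = new WriteStream();
capacitor.write("hello ");
capacitor.write("world");
capacitor.end();

const readStream = capacitor.createReadStream();

// consume the buffered data as it becomes available
for await (const chunk of readStream) {
  console.log(`received ${chunk.length} bytes`);
}

capacitor.destroy();
```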

### ReadStreamOptions

#### `.highWaterMark`

Uses node's default of `16384` (16 KiB). Optional value to use as the readable stream's highWaterMark, specifying the number of bytes (for binary data) or characters (for strings) that will be buffered into memory. See [node's docs for `stream.Readable`](https://nodejs.org/api/stream.html#stream_new_stream_readable_options). For the curious, node has [a guide on backpressure in streams](https://nodejs.org/es/docs/guides/backpressuring-in-streams/).

#### `.encoding`

Uses node's default of `utf8`. Optional encoding to use when the stream's output is desired as a string. See [node's docs for `stream.Readable`](https://nodejs.org/api/stream.html#stream_new_stream_readable_options). Possible values depend on the version of node, and are [defined in node's buffer implementation](https://github.com/nodejs/node/blob/master/lib/buffer.js).
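
As a short sketch, both options can be passed when creating a read stream (the values below are illustrative):

```js
import { WriteStream } from "fs-capacitor";

const capacitor = new WriteStream();

// read the buffered data back as utf8 strings, keeping at most
// 1 KiB in memory at a time
const readStream = capacitor.createReadStream({
  encoding: "utf8",
  highWaterMark: 1024,
});
```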