# Worker Farm [![Build Status](https://secure.travis-ci.org/rvagg/node-worker-farm.png)](http://travis-ci.org/rvagg/node-worker-farm)

[![NPM](https://nodei.co/npm/worker-farm.png?downloads=true&downloadRank=true&stars=true)](https://nodei.co/npm/worker-farm/) [![NPM](https://nodei.co/npm-dl/worker-farm.png?months=6&height=3)](https://nodei.co/npm/worker-farm/)


Distribute processing tasks to child processes with an über-simple API and baked-in durability & custom concurrency options. *Available in npm as <strong>worker-farm</strong>*.

## Example

Given a file, *child.js*:

```js
module.exports = function (inp, callback) {
  callback(null, inp + ' BAR (' + process.pid + ')')
}
```

And a main file:

```js
var workerFarm = require('worker-farm')
  , workers    = workerFarm(require.resolve('./child'))
  , ret        = 0

for (var i = 0; i < 10; i++) {
  workers('#' + i + ' FOO', function (err, outp) {
    console.log(outp)
    if (++ret == 10)
      workerFarm.end(workers)
  })
}
```

We'll get an output something like the following:

```
#1 FOO BAR (8546)
#0 FOO BAR (8545)
#8 FOO BAR (8545)
#9 FOO BAR (8546)
#2 FOO BAR (8548)
#4 FOO BAR (8551)
#3 FOO BAR (8549)
#6 FOO BAR (8555)
#5 FOO BAR (8553)
#7 FOO BAR (8557)
```

This example is contained in the *[examples/basic](https://github.com/rvagg/node-worker-farm/tree/master/examples/basic/)* directory.

### Example #1: Estimating π using child workers

You will also find a more complex example in *[examples/pi](https://github.com/rvagg/node-worker-farm/tree/master/examples/pi/)* that estimates the value of **π** by using a Monte Carlo *area-under-the-curve* method and compares the speed of doing it all in-process vs using child workers to complete separate portions.

Running `node examples/pi` will give you something like:

```
Doing it the slow (single-process) way...
π ≈ 3.1416269360000006 (0.0000342824102075312 away from actual!)
took 8341 milliseconds
Doing it the fast (multi-process) way...
π ≈ 3.1416233600000036 (0.00003070641021052367 away from actual!)
took 1985 milliseconds
```
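
As a rough sketch of the idea (this is *not* the code in *examples/pi*; the module name and the way results are combined are assumptions for illustration), a child module for this kind of estimate could simply count how many random points in the unit square fall under the quarter-circle:

```js
// estimate.js -- hypothetical child module, not the actual examples/pi code.
// Counts how many of `samples` random points in the unit square fall under
// the curve y = sqrt(1 - x^2); that fraction approximates π/4.
module.exports = function (samples, callback) {
  var inside = 0
  for (var i = 0; i < samples; i++) {
    var x = Math.random()
      , y = Math.random()
    if (x * x + y * y <= 1)
      inside++
  }
  // Return the partial count; the parent would sum the counts from all
  // workers and multiply by 4 / totalSamples to get the estimate.
  callback(null, inside)
}
```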

## Durability

An important feature of Worker Farm is **call durability**. If a child process dies for any reason during the execution of call(s), those calls will be re-queued and taken care of by other child processes. In this way, when you ask for something to be done, unless there is something *seriously* wrong with what you're doing, you should get a result on your callback function.
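
For example, with a hypothetical *flaky.js* child that crashes partway through some of its calls, every call submitted from the parent still gets its callback invoked; the calls that were in flight when a child died are simply handed to another worker. A contrived sketch (not part of the examples):

```js
// flaky.js -- hypothetical child that dies mid-call roughly 20% of the time
module.exports = function (inp, callback) {
  if (Math.random() < 0.2)
    process.exit(1) // simulate an unexpected crash before answering
  callback(null, inp * 2)
}
```

Feeding this child numbers in a loop like the basic example above should still produce a result for every call; it will just take a little longer while dead workers are replaced.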

## My use-case

There are other libraries for managing worker processes available, but my use-case was fairly specific: I needed to make heavy use of the [node-java](https://github.com/nearinfinity/node-java) library to interact with JVM code. Unfortunately, because the JVM garbage collector is so difficult to interact with, it's prone to killing your Node process when the GC kicks in under heavy load. For safety I needed a durable way to make calls so that (1) it wouldn't kill my main process and (2) any calls that weren't successful would be resubmitted for processing.

Worker Farm allows me to spin up multiple JVMs to be controlled by Node through a single, uncomplicated API that acts the same way as an in-process API. The calls will still be taken care of even if an error kills a child process while it is working, as the call will simply be passed to a new child process.

**But** don't think that Worker Farm is specific to that use-case; it's designed to be very generic and simple to adapt to anything requiring the use of child Node processes.

## API

Worker Farm exports a main function and an `end()` method. The main function sets up a "farm" of coordinated child-process workers and it can be used to instantiate multiple farms, all operating independently.

### workerFarm([options, ]pathToModule[, exportedMethods])

In its most basic form, you call `workerFarm()` with the path to a module file to be invoked by the child process. You should use an **absolute path** to the module file; the best way to obtain one is with `require.resolve('./path/to/module')`, which can be used in exactly the same way as `require('./path/to/module')` but returns an absolute path.

#### `exportedMethods`

If your module exports a single function on `module.exports` then you should omit the final parameter. However, if you are exporting multiple functions on `module.exports` then you should list them in an Array of Strings:

```js
var workers = workerFarm(require.resolve('./mod'), [ 'doSomething', 'doSomethingElse' ])
workers.doSomething(function () {})
workers.doSomethingElse(function () {})
```

Listing the available methods will instruct Worker Farm what API to provide you with on the returned object. If you don't supply an `exportedMethods` Array then you'll get a single callable function to use; but if you list the available methods then you'll get an object with callable functions by those names.

**It is assumed that each function you call on your child module will take a `callback` function as the last argument.**
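
For the usage above, a matching child module might look like the following sketch (the method bodies are placeholders; only the shape matters):

```js
// mod.js -- sketch of a child module exporting multiple methods;
// each method takes a callback as its last argument
module.exports.doSomething = function (callback) {
  callback(null, 'doSomething ran in process ' + process.pid)
}

module.exports.doSomethingElse = function (callback) {
  callback(null, 'doSomethingElse ran in process ' + process.pid)
}
```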

#### `options`

If you don't provide an `options` object then the following defaults will be used:

```js
{
    maxCallsPerWorker           : Infinity
  , maxConcurrentWorkers        : require('os').cpus().length
  , maxConcurrentCallsPerWorker : 10
  , maxConcurrentCalls          : Infinity
  , maxCallTime                 : Infinity
  , maxRetries                  : Infinity
  , autoStart                   : false
}
```

 * **<code>maxCallsPerWorker</code>** allows you to control the lifespan of your child processes. A positive number will indicate that you only want each child to accept that many calls before it is terminated. This may be useful if you need to control memory leaks or similar in child processes.

 * **<code>maxConcurrentWorkers</code>** will set the number of child processes to maintain concurrently. By default it is set to the number of CPUs available on the current system, but it can be any reasonable number, including `1`.

 * **<code>maxConcurrentCallsPerWorker</code>** allows you to control the *concurrency* of individual child processes. Calls are placed into a queue and farmed out to child processes according to the number of calls they are allowed to handle concurrently. It is arbitrarily set to 10 by default so that calls are shared relatively evenly across workers; however, if your calls predictably take a similar amount of time then you could set it to `Infinity` and Worker Farm won't queue any calls but will spread them evenly across child processes and let them go at it. If your calls aren't I/O bound then it won't matter what value you use here as the individual workers won't be able to execute more than a single call at a time.

 * **<code>maxConcurrentCalls</code>** allows you to control the maximum number of calls in the queue&mdash;either actively being processed or waiting to be processed by a worker. `Infinity` indicates no limit but if you have conditions that may endlessly queue jobs and you need to set a limit then provide a `>0` value and any calls that push the limit will return on their callback with a `MaxConcurrentCallsError` error (check `err.type == 'MaxConcurrentCallsError'`).

 * **<code>maxCallTime</code>** *(use with caution, understand what this does before you use it!)* when `!== Infinity`, will cap the time, in milliseconds, that *any single call* can take to execute in a worker. If this time limit is exceeded by just a single call then the worker running that call will be killed and any calls running on that worker will have their callbacks returned with a `TimeoutError` (check `err.type == 'TimeoutError'`). If you are running with a `maxConcurrentCallsPerWorker` value greater than `1` then **all calls currently executing** will fail and will be automatically resubmitted unless you've changed the `maxRetries` option. Use this if you have jobs that may potentially end in infinite loops that you can't programmatically end with your child code. Preferably run this with a `maxConcurrentCallsPerWorker` value of `1` so you don't interrupt other calls when you have a timeout. This timeout operates on a per-call basis but will interrupt a whole worker. (See the sketch after this list for handling these errors.)

 * **<code>maxRetries</code>** allows you to control the maximum number of call requeues after worker termination (unexpected or timeout). By default this option is set to `Infinity`, which means that each call of each terminated worker will always be auto requeued. When the number of retries exceeds the `maxRetries` value, the job callback will be executed with a `ProcessTerminatedError`. Note that if you are running with a finite `maxCallTime` and a `maxConcurrentCallsPerWorker` greater than `1` then any `TimeoutError` will increase the retries counter *for each* concurrent call of the terminated worker.

 * **<code>autoStart</code>** when set to `true` will start the workers as early as possible. Use this when your workers have to do expensive initialization. That way they'll be ready when the first request comes through.
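
Putting a few of these together, a farm with explicit limits might be set up like the sketch below; the option values are arbitrary examples, and the error handling simply switches on the `err.type` values described above:

```js
var workerFarm = require('worker-farm')
  , workers = workerFarm({
        maxConcurrentWorkers        : 4
      , maxConcurrentCallsPerWorker : 1
      , maxCallTime                 : 5000   // kill any call that runs longer than 5s
      , maxConcurrentCalls          : 1000
      , maxRetries                  : 3
      , autoStart                   : true
    }, require.resolve('./child'))

workers('#1 FOO', function (err, outp) {
  if (err) {
    if (err.type == 'TimeoutError')
      console.error('call exceeded maxCallTime')
    else if (err.type == 'MaxConcurrentCallsError')
      console.error('the call queue is full')
    else if (err.type == 'ProcessTerminatedError')
      console.error('worker died and maxRetries was exhausted')
    return
  }
  console.log(outp)
})
```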

### workerFarm.end(farm)

Child processes stay alive waiting for jobs indefinitely and your farm manager will stay alive managing its workers, so if you need it to stop then you have to do so explicitly. If you send your farm API to `workerFarm.end()` then it'll cleanly end your worker processes. Note though that it's a *soft* ending so it'll wait for child processes to finish what they are working on before asking them to die.

Any calls that are queued and not yet being handled by a child process will be discarded. `end()` only waits for those currently in progress.

Once you end a farm, it won't handle any more calls, so don't even try!
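
In other words, only call `end()` once all of your callbacks have fired, just as the basic example at the top does. A minimal sketch of that pattern:

```js
var workerFarm = require('worker-farm')
  , workers = workerFarm(require.resolve('./child'))
  , pending = 10

for (var i = 0; i < 10; i++) {
  workers('#' + i + ' FOO', function (err, outp) {
    console.log(outp)
    // shut the farm down only after every call has come back
    if (--pending === 0)
      workerFarm.end(workers)
  })
}
```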

## Related

* [farm-cli](https://github.com/Kikobeats/farm-cli) – Launch a farm of workers from CLI.

## License

Worker Farm is Copyright (c) 2014 Rod Vagg [@rvagg](https://twitter.com/rvagg) and licensed under the MIT license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE.md file for more details.