# tape <sup>[![Version Badge][npm-version-svg]][package-url]</sup>

tap-producing test harness for node and browsers

[![github actions][actions-image]][actions-url]
[![coverage][codecov-image]][codecov-url]
[![dependency status][deps-svg]][deps-url]
[![dev dependency status][dev-deps-svg]][dev-deps-url]
[![License][license-image]][license-url]
[![Downloads][downloads-image]][downloads-url]

[![npm badge][npm-badge-png]][package-url]

![tape](https://web.archive.org/web/20170612184731if_/http://substack.net/images/tape_drive.png)

# example

``` js
var test = require('tape');

test('timing test', function (t) {
    t.plan(2);

    t.equal(typeof Date.now, 'function');
    var start = Date.now();

    setTimeout(function () {
        t.equal(Date.now() - start, 100);
    }, 100);
});

test('test using promises', async function (t) {
    const result = await someAsyncThing();
    t.ok(result);
});
```

```
$ node example/timing.js
TAP version 13
# timing test
ok 1 should be strictly equal
not ok 2 should be strictly equal
  ---
    operator: equal
    expected: 100
    actual:   107
  ...

1..2
# tests 2
# pass 1
# fail 1
```

# usage

You always need to `require('tape')` in test files. You can run the tests by the usual node means (`require('test-file.js')` or `node test-file.js`).
You can also run tests using the `tape` binary to utilize globbing, on Windows for example:

```sh
$ tape tests/**/*.js
```

`tape`'s arguments are passed to the [`glob`](https://www.npmjs.com/package/glob) module.
If you want `glob` to perform the expansion on a system where the shell performs such expansion, quote the arguments as necessary:

```sh
$ tape 'tests/**/*.js'
$ tape "tests/**/*.js"
```

## Preloading modules

Additionally, it is possible to make `tape` load one or more modules before running any tests, by using the `-r` or `--require` flag. Here's an example that loads [babel-register](http://babeljs.io/docs/usage/require/) before running any tests, to allow for JIT compilation:

```sh
$ tape -r babel-register tests/**/*.js
```

Depending on the module you're loading, you may be able to parameterize it using environment variables or auxiliary files. Babel, for instance, will load options from [`.babelrc`](http://babeljs.io/docs/usage/babelrc/) at runtime.

The `-r` flag behaves exactly like node's `require`, and uses the same module resolution algorithm. This means that if you need to load local modules, you have to prepend their path with `./` or `../` accordingly.

For example:

```sh
$ tape -r ./my/local/module tests/**/*.js
```

Please note that all modules loaded using the `-r` flag will run *before* any tests, regardless of where they are specified. For example, `tape -r a b -r c` will actually load `a` and `c` *before* loading `b`, since they are flagged as required modules.

# things that go well with tape

`tape` maintains a fairly minimal core. Additional features are usually added by using another module alongside `tape`.

## pretty reporters

The default TAP output is good for machines and humans that are robots.

If you want a more colorful / pretty output there are lots of modules on npm that will output something pretty if you pipe TAP into them:

- [tap-spec](https://github.com/scottcorgan/tap-spec)
- [tap-dot](https://github.com/scottcorgan/tap-dot)
- [faucet](https://github.com/ljharb/faucet)
- [tap-bail](https://github.com/juliangruber/tap-bail)
- [tap-browser-color](https://github.com/kirbysayshi/tap-browser-color)
- [tap-json](https://github.com/gummesson/tap-json)
- [tap-min](https://github.com/derhuerst/tap-min)
- [tap-nyan](https://github.com/calvinmetcalf/tap-nyan)
- [tap-pessimist](https://www.npmjs.org/package/tap-pessimist)
- [tap-prettify](https://github.com/toolness/tap-prettify)
- [colortape](https://github.com/shuhei/colortape)
- [tap-xunit](https://github.com/aghassemi/tap-xunit)
- [tap-difflet](https://github.com/namuol/tap-difflet)
- [tape-dom](https://github.com/gritzko/tape-dom)
- [tap-diff](https://github.com/axross/tap-diff)
- [tap-notify](https://github.com/axross/tap-notify)
- [tap-summary](https://github.com/zoubin/tap-summary)
- [tap-markdown](https://github.com/Hypercubed/tap-markdown)
- [tap-html](https://github.com/gabrielcsapo/tap-html)
- [tap-react-browser](https://github.com/mcnuttandrew/tap-react-browser)
- [tap-junit](https://github.com/dhershman1/tap-junit)
- [tap-nyc](https://github.com/MegaArman/tap-nyc)
- [tap-spec (emoji patch)](https://github.com/Sceat/tap-spec-emoji)
- [tape-repeater](https://github.com/rgruesbeck/tape-repeater)
- [tabe](https://github.com/Josenzo/tabe)

To use them, try `node test/index.js | tap-spec` or pipe it into one of the modules of your choice!

## uncaught exceptions

By default, uncaught exceptions in your tests will not be intercepted, and will cause `tape` to crash. If you find this behavior undesirable, use [`tape-catch`](https://github.com/michaelrhodes/tape-catch) to report any exceptions as TAP errors.

## other

- CoffeeScript support with https://www.npmjs.com/package/coffeetape
- ES6 support with https://www.npmjs.com/package/babel-tape-runner or https://www.npmjs.com/package/buble-tape-runner
- Different test syntax with https://github.com/pguth/flip-tape (warning: mutates String.prototype)
- Electron test runner with https://github.com/tundrax/electron-tap
- Concurrency support with https://github.com/imsnif/mixed-tape
- In-process reporting with https://github.com/DavidAnson/tape-player
- Describe blocks with https://github.com/mattriley/tape-describe

# command-line flags

While running tests, top-level configurations can be passed via the command line to specify the desired behavior.

Available configurations are listed below:

## --require

**Alias**: `-r`

This is used to load modules before running tests and is explained extensively in the [preloading modules](#preloading-modules) section.

## --ignore

**Alias**: `-i`

This flag is used when tests from certain folders and/or files are not intended to be run. When passed with no argument, it defaults to the `.gitignore` file.

```sh
tape -i .ignore **/*.js
```

An error is thrown if the specified file does not exist.

## --no-only

This is particularly useful in a CI environment where an [only test](#testonlyname-opts-cb) is not supposed to go unnoticed.

By passing the `--no-only` flag, any existing [only test](#testonlyname-opts-cb) causes the test run to fail.

```sh
tape --no-only **/*.js
```

Alternatively, the environment variable `NODE_TAPE_NO_ONLY_TEST` can be set to `true` to achieve the same behavior; the command-line flag takes precedence.

# methods

The assertion methods in `tape` are heavily influenced or copied from the methods in [node-tap](https://github.com/isaacs/node-tap).

```js
var test = require('tape')
```

## test([name], [opts], cb)

Create a new test with an optional `name` string and optional `opts` object.
`cb(t)` fires with the new test object `t` once all preceding tests have finished.
Tests execute serially.

Available `opts` options are:
- `opts.skip = true/false`. See `test.skip`.
- `opts.timeout = 500`. Set a timeout for the test, after which it will fail. See `test.timeoutAfter`.
- `opts.objectPrintDepth = 5`. Configure the max depth of expected / actual object printing. The environment variable `NODE_TAPE_OBJECT_PRINT_DEPTH` can set the desired default depth for all tests; locally-set values take precedence.
- `opts.todo = true/false`. The test will be allowed to fail.

If you don't `t.plan()` the number of assertions you are going to run, and you don't call `t.end()` explicitly or return a Promise that eventually settles, your test will hang.

If `cb` returns a Promise, it will be implicitly awaited. If that promise rejects, the test fails; if it fulfills, the test ends. Explicitly calling `t.end()` while also returning a Promise that fulfills is an error.

## test.skip([name], [opts], cb)

Generate a new test that will be skipped over.

## test.onFinish(fn)

The onFinish hook will be invoked when ALL `tape` tests have finished, right before `tape` prints the test summary.

`fn` is called with no arguments, and its return value is ignored.

## test.onFailure(fn)

The onFailure hook will be invoked whenever any `tape` test fails.

`fn` is called with no arguments, and its return value is ignored.

## t.plan(n)

Declare that `n` assertions should be run. `t.end()` will be called automatically after the `n`th assertion.
If there are any more assertions after the `n`th, or after `t.end()` is called, they will generate errors.

## t.end(err)

Declare the end of a test explicitly. If `err` is passed in, `t.end` will assert that it is falsy.

Do not call `t.end()` if your test callback returns a Promise.

## t.teardown(cb)

Register a callback to run after the individual test has completed. Multiple registered teardown callbacks will run in order. Useful for undoing side effects, closing network connections, etc.

## t.fail(msg)

Generate a failing assertion with a message `msg`.

## t.pass(msg)

Generate a passing assertion with a message `msg`.

## t.timeoutAfter(ms)

Automatically fail the test after `ms` milliseconds.

## t.skip(msg)

Generate an assertion that will be skipped over.

## t.ok(value, msg)

Assert that `value` is truthy with an optional description of the assertion `msg`.

Aliases: `t.true()`, `t.assert()`

## t.notOk(value, msg)

Assert that `value` is falsy with an optional description of the assertion `msg`.

Aliases: `t.false()`, `t.notok()`

## t.error(err, msg)

Assert that `err` is falsy. If `err` is non-falsy, use its `err.message` as the description message.

Aliases: `t.ifError()`, `t.ifErr()`, `t.iferror()`

## t.equal(actual, expected, msg)

Assert that `Object.is(actual, expected)` with an optional description of the assertion `msg`.

Aliases: `t.equals()`, `t.isEqual()`, `t.strictEqual()`, `t.strictEquals()`, `t.is()`

## t.notEqual(actual, expected, msg)

Assert that `!Object.is(actual, expected)` with an optional description of the assertion `msg`.

Aliases: `t.notEquals()`, `t.isNotEqual()`, `t.doesNotEqual()`, `t.isInequal()`, `t.notStrictEqual()`, `t.notStrictEquals()`, `t.isNot()`, `t.not()`

## t.looseEqual(actual, expected, msg)

Assert that `actual == expected` with an optional description of the assertion `msg`.

Aliases: `t.looseEquals()`

## t.notLooseEqual(actual, expected, msg)

Assert that `actual != expected` with an optional description of the assertion `msg`.

Aliases: `t.notLooseEquals()`

## t.deepEqual(actual, expected, msg)

Assert that `actual` and `expected` have the same structure and nested values using [node's deepEqual() algorithm](https://github.com/inspect-js/node-deep-equal) with strict comparisons (`===`) on leaf nodes and an optional description of the assertion `msg`.

Aliases: `t.deepEquals()`, `t.isEquivalent()`, `t.same()`

## t.notDeepEqual(actual, expected, msg)

Assert that `actual` and `expected` do not have the same structure and nested values using [node's deepEqual() algorithm](https://github.com/inspect-js/node-deep-equal) with strict comparisons (`===`) on leaf nodes and an optional description of the assertion `msg`.

Aliases: `t.notDeepEquals()`, `t.notEquivalent()`, `t.notDeeply()`, `t.notSame()`, `t.isNotDeepEqual()`, `t.isNotDeeply()`, `t.isNotEquivalent()`, `t.isInequivalent()`

## t.deepLooseEqual(actual, expected, msg)

Assert that `actual` and `expected` have the same structure and nested values using [node's deepEqual() algorithm](https://github.com/inspect-js/node-deep-equal) with loose comparisons (`==`) on leaf nodes and an optional description of the assertion `msg`.

## t.notDeepLooseEqual(actual, expected, msg)

Assert that `actual` and `expected` do not have the same structure and nested values using [node's deepEqual() algorithm](https://github.com/inspect-js/node-deep-equal) with loose comparisons (`==`) on leaf nodes and an optional description of the assertion `msg`.

Aliases: `t.notLooseEqual()`, `t.notLooseEquals()`

## t.throws(fn, expected, msg)

Assert that the function call `fn()` throws an exception. `expected`, if present, must be a `RegExp`, `Function`, or `Object`. The `RegExp` matches the string representation of the exception, as generated by `err.toString()`. For example, if you set `expected` to `/user/`, the test will pass only if the string representation of the exception contains the word `user`. Any other exception will result in a failed test. The `Function` is the constructor of the exception thrown (e.g. `Error`). An `Object` in this case is a so-called validation object, in which each property is tested for strict deep equality. As an example, see the following two tests, each of which passes a validation object to `t.throws()` as the second parameter. The first test will pass, because all property values in the actual error object are deeply strictly equal to the property values in the validation object.

```js
const err = new TypeError("Wrong value");
err.code = 404;
err.check = true;

// Passing test.
t.throws(
    () => {
        throw err;
    },
    {
        code: 404,
        check: true
    },
    "Test message."
);
```

This next test will fail, because all property values in the actual error object are _not_ deeply strictly equal to the property values in the validation object.

```js
const err = new TypeError("Wrong value");
err.code = 404;
err.check = "true";

// Failing test.
t.throws(
    () => {
        throw err;
    },
    {
        code: 404,
        check: true // This is not deeply strictly equal to err.check.
    },
    "Test message."
);
```

This is very similar to how Node's `assert.throws()` method tests validation objects (please see the [Node _assert.throws()_ documentation](https://nodejs.org/api/assert.html#assert_assert_throws_fn_error_message) for more information).

If `expected` is not of type `RegExp`, `Function`, or `Object`, or is omitted entirely, any exception will result in a passed test. `msg` is an optional description of the assertion.

Please note that the second parameter, `expected`, cannot be of type `string`. If a value of type `string` is provided for `expected`, then `t.throws(fn, expected, msg)` will execute, but the value of `expected` will be set to `undefined`, and the specified string will be used as the value for the `msg` parameter (regardless of what was _actually_ passed as the third parameter). This can cause unexpected results, so please be mindful.

## t.doesNotThrow(fn, expected, msg)

Assert that the function call `fn()` does not throw an exception. `expected`, if present, limits what should not be thrown, and must be a `RegExp` or `Function`. The `RegExp` matches the string representation of the exception, as generated by `err.toString()`. For example, if you set `expected` to `/user/`, the test will fail only if the string representation of the exception contains the word `user`. Any other exception will result in a passed test. The `Function` is the constructor of the exception thrown (e.g. `Error`). If `expected` is not of type `RegExp` or `Function`, or is omitted entirely, any exception will result in a failed test. `msg` is an optional description of the assertion.

Please note that the second parameter, `expected`, cannot be of type `string`. If a value of type `string` is provided for `expected`, then `t.doesNotThrow(fn, expected, msg)` will execute, but the value of `expected` will be set to `undefined`, and the specified string will be used as the value for the `msg` parameter (regardless of what was _actually_ passed as the third parameter). This can cause unexpected results, so please be mindful.

## t.test(name, [opts], cb)

Create a subtest with a new test handle `st` from `cb(st)` inside the current test `t`. `cb(st)` will only fire when `t` finishes. Additional tests queued up after `t` will not be run until all subtests finish.

You may pass the same options that [`test()`](#testname-opts-cb) accepts.

## t.comment(message)

Print a message without breaking the tap output. (Useful when using e.g. `tap-colorize`, where output is buffered and `console.log` will print in incorrect order vis-a-vis tap output.)

Multiline output will be split by `\n` characters, and each one printed as a comment.

## t.match(string, regexp, message)

Assert that `string` matches the RegExp `regexp`. Will fail when the first two arguments are of the wrong type.

## t.doesNotMatch(string, regexp, message)

Assert that `string` does not match the RegExp `regexp`. Will fail when the first two arguments are of the wrong type.

## var htest = test.createHarness()

Create a new test harness instance, which is a function like `test()`, but with a new pending stack and test state.

By default the TAP output goes to `console.log()`. You can pipe the output to someplace else if you `htest.createStream().pipe()` to a destination stream on the first tick.

## test.only([name], [opts], cb)

Like `test([name], [opts], cb)` except that if you use `.only`, this is the only test case that will run for the entire process; all other test cases using `tape` will be ignored.

Check out how the usage of [the --no-only flag](#--no-only) can help ensure there is no `.only` test running in a specified environment.

## var stream = test.createStream(opts)

Create a stream of output, bypassing the default output stream that writes messages to `console.log()`. By default `stream` will be a text stream of TAP output, but you can get an object stream instead by setting `opts.objectMode` to `true`.

### tap stream reporter

You can create your own custom test reporter using this `createStream()` api:

``` js
var test = require('tape');
var path = require('path');

test.createStream().pipe(process.stdout);

process.argv.slice(2).forEach(function (file) {
    require(path.resolve(file));
});
```

You could substitute `process.stdout` for whatever other output stream you want, like a network connection or a file.

Pass in test files to run as arguments:

```sh
$ node tap.js test/x.js test/y.js
TAP version 13
# (anonymous)
not ok 1 should be strictly equal
  ---
    operator: equal
    expected: "boop"
    actual:   "beep"
  ...
# (anonymous)
ok 2 should be strictly equal
ok 3 (unnamed assert)
# wheee
ok 4 (unnamed assert)

1..4
# tests 4
# pass 3
# fail 1
```

### object stream reporter

Here's how you can render an object stream instead of TAP:

``` js
var test = require('tape');
var path = require('path');

test.createStream({ objectMode: true }).on('data', function (row) {
    console.log(JSON.stringify(row))
});

process.argv.slice(2).forEach(function (file) {
    require(path.resolve(file));
});
```

The output for this runner is:

```sh
$ node object.js test/x.js test/y.js
{"type":"test","name":"(anonymous)","id":0}
{"id":0,"ok":false,"name":"should be strictly equal","operator":"equal","actual":"beep","expected":"boop","error":{},"test":0,"type":"assert"}
{"type":"end","test":0}
{"type":"test","name":"(anonymous)","id":1}
{"id":0,"ok":true,"name":"should be strictly equal","operator":"equal","actual":2,"expected":2,"test":1,"type":"assert"}
{"id":1,"ok":true,"name":"(unnamed assert)","operator":"ok","actual":true,"expected":true,"test":1,"type":"assert"}
{"type":"end","test":1}
{"type":"test","name":"wheee","id":2}
{"id":0,"ok":true,"name":"(unnamed assert)","operator":"ok","actual":true,"expected":true,"test":2,"type":"assert"}
{"type":"end","test":2}
```

A convenient alternative to achieve the same:

```js
// report.js
var test = require('tape');

test.createStream({ objectMode: true }).on('data', function (row) {
    console.log(JSON.stringify(row)) // for example
});
```

and then:

```sh
$ tape -r ./report.js **/*.test.js
```

# install

With [npm](https://npmjs.org) do:

```sh
npm install tape --save-dev
```

# troubleshooting

Sometimes `t.end()` doesn’t preserve the expected output ordering.

For instance the following:

```js
var test = require('tape');

test('first', function (t) {

    setTimeout(function () {
        t.ok(1, 'first test');
        t.end();
    }, 200);

    t.test('second', function (t) {
        t.ok(1, 'second test');
        t.end();
    });
});

test('third', function (t) {
    setTimeout(function () {
        t.ok(1, 'third test');
        t.end();
    }, 100);
});
```

will output:

```
ok 1 second test
ok 2 third test
ok 3 first test
```

because `second` and `third` assume `first` has ended before it actually has.

Use `t.plan()` instead to let other tests know they should wait:

```diff
var test = require('tape');

test('first', function (t) {

+    t.plan(2);

    setTimeout(function () {
        t.ok(1, 'first test');
-        t.end();
    }, 200);

    t.test('second', function (t) {
        t.ok(1, 'second test');
        t.end();
    });
});

test('third', function (t) {
    setTimeout(function () {
        t.ok(1, 'third test');
        t.end();
    }, 100);
});
```

# license

MIT

[package-url]: https://npmjs.org/package/tape
[npm-version-svg]: https://versionbadg.es/ljharb/tape.svg
[deps-svg]: https://david-dm.org/ljharb/tape.svg
[deps-url]: https://david-dm.org/ljharb/tape
[dev-deps-svg]: https://david-dm.org/ljharb/tape/dev-status.svg
[dev-deps-url]: https://david-dm.org/ljharb/tape#info=devDependencies
[npm-badge-png]: https://nodei.co/npm/tape.png?downloads=true&stars=true
[license-image]: https://img.shields.io/npm/l/tape.svg
[license-url]: LICENSE
[downloads-image]: https://img.shields.io/npm/dm/tape.svg
[downloads-url]: https://npm-stat.com/charts.html?package=tape
[codecov-image]: https://codecov.io/gh/ljharb/tape/branch/master/graphs/badge.svg
[codecov-url]: https://app.codecov.io/gh/ljharb/tape/
[actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/ljharb/tape
[actions-url]: https://github.com/ljharb/tape/actions