# zora

Fast JavaScript testing library for **Node.js** and **browsers**

[CI](https://circleci.com/gh/lorenzofox3/zora)
[npm](https://www.npmjs.com/package/zora)
[install size](https://packagephobia.now.sh/result?p=zora)

[Gitlab mirror](https://gitlab.com/zora-test/zora)

## Installation

``npm i --save-dev zora``

Note that version 3 of zora targets modern JavaScript engines: behind the scenes it uses *asynchronous iterators* and the *for await* statement, both of which
are supported by Node.js (>= 10, or >= 8 with a flag) and all major browsers. If you wish to use v2, you can find its code and documentation on the [v2 branch](https://github.com/lorenzofox3/zora/tree/v2).

## (Un)Opinions and Design

These are the rules and ideas I have followed while developing zora. Whether they are right or not is an entirely different topic! :D
Note that I decided to develop zora precisely because I could not find a tool which complies entirely with these ideas.

[Read more](https://dev.to/lorenzofox3/tools-and-the-design-of-a-testing-experience-2mdc) on how it fits in the [UNIX philosophy](https://en.wikipedia.org/wiki/Unix_philosophy).

### Tests are regular JavaScript programs

You don't need a specific test runner, a specific platform or any build step to run your `zora` tests: they are just regular, valid ECMAScript 2018 programs.
If you have the following test:
```Javascript
import {test} from 'path/to/zora';

test('should result to the answer', t => {
    const answer = 42;
    t.equal(answer, 42, 'answer should be 42');
});
```

you can run it with:
1. Node: ``node ./myTestFile.js``
2. the browser, identically: ``<script type="module" src="./myTestFile.js"></script>``

Moreover, zora does not use any platform-specific API, which should make it transparent to most of your tools such as module bundlers or transpilers.

In a few words:
> Zora is ECMAScript, no less, no more.

### Tests are fast

Tests are part of our daily routine as software developers. Performance is part of the user experience and there is no reason you should wait seconds for your tests to run.
Zora is by far the **fastest** JavaScript test runner in the ecosystem.

#### Benchmark

This repository includes a benchmark which consists of running N test files, with M tests in each, where each test lasts T milliseconds.
About 5% of the tests should fail.

1. profile library: N = 5, M = 8, T = 25ms
2. profile web app: N = 10, M = 8, T = 40ms
3. profile api: N = 12, M = 10, T = 100ms

Each framework runs with its default settings.

Here are the results for different test frameworks on my development machine (MacBook Pro, 2.7GHz i5) with Node.js 12:

|         | zora@3.1.0 | pta@0.1.0 | tape@4.11.2 | Jest@24.9.0 | AvA@2.4.0 | Mocha@6.2.1 |
|---------|:----------:|:---------:|:-----------:|:-----------:|:---------:|:-----------:|
| Library | 102ms      | 231ms     | 1240ms      | 2835ms      | 1888ms    | 1349ms      |
| Web app | 134ms      | 278ms     | 3523ms      | 4084ms      | 2900ms    | 3696ms      |
| API     | 187ms      | 331ms     | 12586ms     | 7380ms      | 3900ms    | 12766ms     |

Of course, as with any benchmark, it may not cover your use case and you should probably run your own measurements before drawing any conclusion.

### Focus on tests only

zora does one thing, but hopefully does it well: **test**.

In my opinion:
1. Pretty reporting (I have not said *efficient reporting*) should be handled by a specific tool.
2. Transpilation and other code transformations should be handled by a specific tool.
3. File watching and caching should be handled by a specific tool.
4. File serving should be handled by a specific tool.
5. Coffee should be made by a specific tool.

As a result, zora is a much smaller install, according to [packagephobia](https://packagephobia.now.sh), than all the other test frameworks:

|              | zora | pta | tape | Jest | AvA | Mocha |
|--------------|:----:|:---:|:----:|:----:|:---:|:-----:|
| Install size | [zora](https://packagephobia.now.sh/result?p=zora) | [pta](https://packagephobia.now.sh/result?p=pta) | [tape](https://packagephobia.now.sh/result?p=tape) | [jest](https://packagephobia.now.sh/result?p=jest) | [ava](https://packagephobia.now.sh/result?p=ava) | [mocha](https://packagephobia.now.sh/result?p=mocha) |

### Reporting is handled by another process (TAP aware)

When you run a test you usually want to know whether there is any failure, where and why, in order to debug and fix the issue as fast as possible.
Whether you want that printed in red, yellow, etc. is a matter of preference.

For this reason, zora outputs [TAP](http://testanything.org/) (Test Anything Protocol) by default. This protocol is "machine friendly" and widely used: [there are plenty of tools](https://github.com/sindresorhus/awesome-tap) to parse it and deal with it the way **you** want.

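For example, you can pipe the output of a test program into any TAP reporter. Here is a minimal sketch of ``package.json`` scripts, assuming a TAP formatter such as [tap-spec](https://www.npmjs.com/package/tap-spec) is installed as a dev dependency (the file name ``./test/index.js`` is only illustrative):

```Javascript
{
  "scripts": {
    "test": "node ./test/index.js",
    "test:pretty": "node ./test/index.js | tap-spec"
  }
}
```
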
## Usage

### Basics

You can use the top-level assertion methods:

```Javascript
import {equal, ok, isNot} from 'zora';

ok(true, 'true is truthy');

equal('bar', 'bar', 'that both strings are equivalent');

isNot({}, {}, 'those are not the same reference');

//etc
```

If you run the previous program, the test report starts on its own by default, with the following console output:

<details>
<summary>output.txt</summary>

```TAP
TAP version 13
ok 1 - true is truthy
ok 2 - that both strings are equivalent
ok 3 - those are not the same reference
1..3

# ok
# success: 3
# skipped: 0
# failure: 0
```

</details>

However, one will usually want to group assertions within a sub test: the ``test`` function can be used for that.

```Javascript
import {test} from 'zora';

test('some grouped assertions', t => {
    t.ok(true, 'true is truthy');
    t.equal('bar', 'bar', 'that both strings are equivalent');
    t.isNot({}, {}, 'those are not the same reference');
});
```

with the following result:

<details>
<summary>output.txt</summary>

```TAP
TAP version 13
# some grouped assertions
ok 1 - true is truthy
ok 2 - that both strings are equivalent
ok 3 - those are not the same reference
1..3

# ok
# success: 3
# skipped: 0
# failure: 0
```

</details>

You can also group tests within a parent test:

```Javascript
import {test} from 'zora';

test('some grouped assertions', t => {
    t.ok(true, 'true is truthy');

    t.test('a group inside another one', t => {
        t.equal('bar', 'bar', 'that both strings are equivalent');
        t.isNot({}, {}, 'those are not the same reference');
    });
});
```
<details>
<summary>output.txt</summary>

```TAP
TAP version 13
# some grouped assertions
ok 1 - true is truthy
# a group inside another one
ok 2 - that both strings are equivalent
ok 3 - those are not the same reference
1..3

# ok
# success: 3
# skipped: 0
# failure: 0
```
</details>

### Asynchronous tests and control flow

Asynchronous tests are simply handled with async functions:

```Javascript
test('with getUsers an asynchronous function returning a Promise', async t => {
    const users = await getUsers();
    t.eq(users.length, 2, 'we should have 2 users');
});
```

Notice that each test runs in its own microtask, in parallel (for performance). This implies your tests should not depend on each other,
which is usually good practice anyway!
However, you can group your tests if you wish to share some state between them or wait for one test to finish before starting another (ideal for tests running against a real database).

The sequence is simply controlled by async functions (and the ``await`` keyword): the ``test`` function returns the result of its spec function argument, so you can control whether you want a specific test to complete before moving on.

```Javascript
let state = 0;

test('test 1', t => {
    t.ok(true);
    state++;
});

test('test 2', t => {
    // Maybe yes, maybe no: you have no guarantee! In this case it works because everything is synchronous
    t.equal(state, 1);
});

// Same thing here, even in nested tests
test('grouped', t => {
    let state = 0;

    t.test('test 1', t => {
        t.ok(true);
        state++;
    });

    t.test('test 2', t => {
        // Maybe yes, maybe no: you have no guarantee! In this case it works because everything is synchronous
        t.equal(state, 1);
    });
});

// And
test('grouped', t => {
    let state = 0;

    t.test('test 1', async t => {
        t.ok(true);
        await wait(100);
        state++;
    });

    test('test 2', t => {
        t.equal(state, 0, 'sees the old state value, as it will have started running before test 1 is done');
    });
});

// But
test('grouped', async t => {
    let state = 0;

    // specifically wait for the end of this test before continuing!
    await t.test('test 1', async t => {
        t.ok(true);
        await wait(100);
        state++;
    });

    test('test 2', t => {
        t.equal(state, 1, 'sees the updated value!');
    });
});
```
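In the examples above, ``wait`` is not part of zora; it stands for any promise-returning delay helper, for instance:

```Javascript
// hypothetical helper used in the examples above: resolves after the given number of milliseconds
const wait = (ms = 100) => new Promise(resolve => setTimeout(resolve, ms));
```
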
### Changing TAP format

The TAP protocol is loosely defined, in the sense that the diagnostic block is quite a free-form space and there is no well-defined format to express sub tests.
In the JavaScript community, most TAP parsers and tools were designed for [tape](https://github.com/substack/tape), which implies a TAP comment as the sub test header and every assertion reported on the same level.
In the same way, these tools expect diagnostics with ``expected``, ``actual``, etc. properties.
This is the format we have used in the previous examples.

If you run the following program:
```Javascript
import {test} from 'zora';

test('tester 1', t => {

    t.ok(true, 'assert1');

    t.test('some nested tester', t => {
        t.ok(true, 'nested 1');
        t.ok(true, 'nested 2');
    });

    t.test('some nested tester bis', t => {
        t.ok(true, 'nested 1');

        t.test('deeply nested', t => {
            t.ok(true, 'deeply nested really');
            t.ok(true, 'deeply nested again');
        });

        t.notOk(true, 'nested 2'); // This one will fail
    });

    t.ok(true, 'assert2');
});

test('tester 2', t => {
    t.ok(true, 'assert3');

    t.test('nested in two', t => {
        t.ok(true, 'still happy');
    });

    t.ok(true, 'assert4');
});
```
<details>
<summary>output.txt</summary>

```TAP
TAP version 13
# tester 1
ok 1 - assert1
# some nested tester
ok 2 - nested 1
ok 3 - nested 2
# some nested tester bis
ok 4 - nested 1
# deeply nested
ok 5 - deeply nested really
ok 6 - deeply nested again
not ok 7 - nested 2
  ---
    actual: true
    expected: "falsy value"
    operator: "notOk"
    at: " t.test.t (/Volumes/Data/code/zora/test/samples/cases/nested.js:20:11)"
  ...
ok 8 - assert2
# tester 2
ok 9 - assert3
# nested in two
ok 10 - still happy
ok 11 - assert4
1..11

# not ok
# success: 10
# skipped: 0
# failure: 1
```

</details>

Another common structure is the one used by [node-tap](http://node-tap.org/). It can still be parsed by common TAP parsers (such as [tap-parser](https://github.com/tapjs/tap-parser)), even those which
do not understand the indentation. However, to take full advantage of the structure you should use a formatter aware of it (such as [tap-mocha-reporter](https://www.npmjs.com/package/tap-mocha-reporter)) to get the whole benefit
of the format.

You can ask zora to indent sub tests with a configuration flag:
1. by setting an environment variable if you run the test program with Node.js: ``INDENT=true node ./path/to/test/program``
2. by setting a global variable on the window object if you run the test program in the browser:
```markup
<script>INDENT=true;</script>
<script src="path/to/test/program"></script>
```

For example, running the following program with the ``INDENT`` flag set:
```Javascript
const {test} = require('zora');

test('tester 1', t => {

    t.ok(true, 'assert1');

    t.test('some nested tester', t => {
        t.ok(true, 'nested 1');
        t.ok(true, 'nested 2');
    });

    t.test('some nested tester bis', t => {
        t.ok(true, 'nested 1');

        t.test('deeply nested', t => {
            t.ok(true, 'deeply nested really');
            t.ok(true, 'deeply nested again');
        });

        t.notOk(true, 'nested 2'); // This one will fail
    });

    t.ok(true, 'assert2');
});

test('tester 2', t => {
    t.ok(true, 'assert3');

    t.test('nested in two', t => {
        t.ok(true, 'still happy');
    });

    t.ok(true, 'assert4');
});
```

<details>
<summary>output.txt</summary>

```TAP
TAP version 13
# Subtest: tester 1
    ok 1 - assert1
    # Subtest: some nested tester
        ok 1 - nested 1
        ok 2 - nested 2
        1..2
    ok 2 - some nested tester # 1ms
    # Subtest: some nested tester bis
        ok 1 - nested 1
        # Subtest: deeply nested
            ok 1 - deeply nested really
            ok 2 - deeply nested again
            1..2
        ok 2 - deeply nested # 1ms
        not ok 3 - nested 2
          ---
            wanted: "falsy value"
            found: true
            at: " t.test.t (/Volumes/Data/code/zora/test/samples/cases/nested.js:22:11)"
            operator: "notOk"
          ...
        1..3
    not ok 3 - some nested tester bis # 1ms
    ok 4 - assert2
    1..4
not ok 1 - tester 1 # 1ms
# Subtest: tester 2
    ok 1 - assert3
    # Subtest: nested in two
        ok 1 - still happy
        1..1
    ok 2 - nested in two # 0ms
    ok 3 - assert4
    1..3
ok 2 - tester 2 # 0ms
1..2

# not ok
# success: 10
# skipped: 0
# failure: 1
```

</details>

### Skip a test

You can decide to skip some tests if you wish not to run them; in that case they will be considered as _passing_. However, the assertion summary at the end will tell you that some tests have been skipped,
and each skipped test gets a TAP skip directive.

```Javascript
import {ok, skip, test} from 'zora';

ok(true, 'hey hey');
ok(true, 'hey hey bis');

test('hello world', t => {
    t.ok(true);
    t.skip('blah', t => {
        t.ok(false);
    });
    t.skip('for some reason');
});

skip('failing text', t => {
    t.ok(false);
});
```

<details>
<summary>output.txt</summary>

```TAP
TAP version 13
ok 1 - hey hey
ok 2 - hey hey bis
# hello world
ok 3 - should be truthy
# blah
ok 4 - blah # SKIP
# for some reason
ok 5 - for some reason # SKIP
# failing text
ok 6 - failing text # SKIP
1..6

# ok
# success: 3
# skipped: 3
# failure: 0
```

</details>

### Run only some tests

While developing, you may want to run only some tests. You can do so with the ``only`` function. If the test you want to run has
sub tests, you will also have to call ``assertion.only`` to make a given sub test run.
You will also have to set the ``RUN_ONLY`` flag to ``true`` (in the same way as ``INDENT``). ``only`` is a convenience
for the developer while working: it has no real meaning for the testing program itself, so if you use ``only`` in a test program and run it without the ``RUN_ONLY`` mode, it will bail out.

```javascript
import {only, test} from 'zora';

test('should not run', t => {
    t.fail('I should not run');
});

only('should run', t => {
    t.ok(true, 'I ran');

    t.only('keep running', t => {
        t.only('keeeeeep running', t => {
            t.ok(true, 'I got there');
        });
    });

    t.test('should not run', t => {
        t.fail('should not run');
    });
});

only('should run but nothing inside', t => {
    t.test('will not run', t => {
        t.fail('should not run');
    });
    t.test('will not run', t => {
        t.fail('should not run');
    });
});
```

If you run the previous program with ``RUN_ONLY=true node ./path/to/program.js``, you will get the following output:

<details>
<summary>output.txt</summary>

```tap
TAP version 13
# should not run
ok 1 - should not run # SKIP
# should run
ok 2 - I ran
# keep running
# keeeeeep running
ok 3 - I got there
# should not run
ok 4 - should not run # SKIP
# should run but nothing inside
# will not run
ok 5 - will not run # SKIP
# will not run
ok 6 - will not run # SKIP
1..6

# ok
# success: 2
# skipped: 4
# failure: 0
```

</details>

### Assertion API

- ``equal<T>(actual: T, expected: T, message?: string)``: verifies that two values/instances are equivalent. It is often described as *deepEqual* in assertion libraries.
  Aliases: ``eq``, ``equals``, ``deepEqual``
- ``notEqual<T>(actual: T, expected: T, message?: string)``: the opposite of ``equal``.
  Aliases: ``notEquals``, ``notEq``, ``notDeepEqual``
- ``is<T>(actual: T, expected: T, message?: string)``: verifies that two instances are the same (basically ``Object.is``).
  Alias: ``same``
- ``isNot<T>(actual: T, expected: T, message?: string)``: the opposite of ``is``.
  Alias: ``notSame``
- ``ok<T>(actual: T, message?: string)``: verifies that a value is truthy.
  Alias: ``truthy``
- ``notOk<T>(actual: T, message?: string)``: verifies that a value is falsy.
  Alias: ``falsy``
- ``fail(message?: string)``: an always failing assertion, usually used when you want a branch of code not to be traversed.
- ``throws(fn: Function, expected?: string | RegExp | Function, description?: string)``: expects an error to be thrown; you can check the expected error by RegExp, constructor or name.
- ``doesNotThrow(fn: Function, expected?: string | RegExp | Function, description?: string)``: expects no error to be thrown; you can check the expected error by RegExp, constructor or name.

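A quick sketch exercising a few of these assertions (the test name and messages are only illustrative):

```Javascript
import {test} from 'zora';

test('assertion API tour', t => {
    t.eq({answer: 42}, {answer: 42}, 'equal compares values, not references');
    t.is(t, t, 'is compares references (Object.is)');
    t.throws(() => {
        throw new TypeError('nope');
    }, TypeError, 'throws can match the error constructor');
    t.doesNotThrow(() => 'all good');
});
```
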
### Create manually a test harness

You can discard the default test harness and create your own. This has various effects:
- the reporting won't start automatically: you will have to trigger it yourself, but it also lets you know when the reporting is over;
- you can pass a custom reporter. Zora produces a stream of messages which are then transformed into a TAP stream. If you create the test harness yourself,
you can directly pass your custom reporter to transform the raw message stream.

```Javascript
const {createHarness, mochaTapLike} = require('zora');

const harness = createHarness();
const {test} = harness;

test('a first sub test', t => {
    t.ok(true);

    t.test('inside', t => {
        t.ok(true);
    });
});

test('a first sub test', t => {
    t.ok(true);

    t.test('inside', t => {
        t.ok(false, 'oh no!');
    });
});

harness
    .report(mochaTapLike) // we have passed mochaTapLike (the indented format), but you can pass whatever reporter you want
    .then(() => {
        // reporting is over: we can release some pending resources
        console.log('DONE !');
        // or, as this test program targets node, set the exit code ourselves in case of a failing test
        const exitCode = harness.pass === true ? 0 : 1;
        process.exit(exitCode);
    });
```
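For illustration, here is a minimal sketch of a custom reporter, assuming (as the built-in reporters suggest) that a reporter is a function receiving the raw message stream as an async iterable; this is not one of zora's built-in reporters:

```Javascript
// hypothetical minimal reporter: dumps every raw message as a JSON line
const rawReporter = async messageStream => {
    for await (const message of messageStream) {
        console.log(JSON.stringify(message));
    }
};

// harness.report(rawReporter);
```
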
In practice you won't use this method unless you have specific requirements or want to build your own test runner on top of zora.

## Node.js test runner

If you want a slightly more opinionated test runner based on zora, you can check out [pta](https://github.com/lorenzofox3/zora-node).

## In the browser

Zora itself does not depend on native Node.js modules (such as the file system, processes, etc.), so the code you get is regular ECMAScript.

### Drop-in file
You can simply drop the dist file into the browser and write your script below it (or load it).
You can, for example, play with this [codepen](https://codepen.io/lorenzofox3/pen/YBWJrJ).

```Html
<!-- some content -->
<body>
<script type="module">

    import {test} from 'path/to/zora';

    test('some test', (assert) => {
        assert.ok(true, 'hey there');
    });

    test('some failing test', (assert) => {
        assert.fail('it failed');
    });
</script>
</body>
<!-- some content -->
```

### As part of CI (example with rollup)

I will use [rollup](http://rollupjs.org/) for this example, but you should not have any problem with [webpack](https://webpack.github.io/) or [browserify](http://browserify.org/). The idea is simply to create a test file your testing browsers will be able to run.

Assuming you have your entry point as follows:
```Javascript
//./test/index.js
import './test1.js'; // some tests here
import './test2.js'; // some more tests there
import './test3.js'; // another test plan
```

where, for example, ./test/test1.js is:
```Javascript
import {test} from 'zora';

test('mytest', (assertions) => {
    assertions.ok(true);
});

test('mytest', (assertions) => {
    assertions.ok(true);
});
```
you can then bundle your tests into a single program:

```Javascript
const node = require('rollup-plugin-node-resolve');
const commonjs = require('rollup-plugin-commonjs');
module.exports = {
    input: './test/index.js',
    output: [{
        name: 'test',
        format: 'iife',
        sourcemap: 'inline' // ideal to debug
    }],
    plugins: [node(), commonjs()], // you can add the babel plugin if you need transpilation
};
```

You can now dump the result into a debug file with
``rollup -c path/to/conf > debug.js``

and load it in your browser (from an html document, for example).

Even better, you can use a browser-friendly TAP runner such as [tape-run](https://www.npmjs.com/package/tape-run), so you'll get a proper exit code depending on the result of your tests.

So, all together, in your package.json you can have something like this:
```Javascript
{
  // ...
  "scripts": {
    "test:ci": "rollup -c path/to/conf | tape-run"
  }
  // ...
}
```

## On exit codes

Whether you have failing tests or not, unless there is an unexpected error, the process will exit with code 0: zora considers its duty is to run the test program to its end, whether there are failing tests or not.
CI platforms usually require an exit code of 1 to mark a build as failed. That is not an issue: there are plenty of TAP reporters which, when parsing a TAP stream, will exit the process with code 1 if they encounter a failing test.
Hence you'll need to pipe zora's output into one of those reporters to avoid false positives on your CI platform.

For example, one of your package.json scripts can be
``"test:ci": "npm test | tap-set-exit"``

## Contributing

1. Clone the repository: ``git clone https://github.com/lorenzofox3/zora.git`` (or fork it from the Github/Gitlab UI)
2. Install the dependencies: ``npm i``
3. Build the source files: ``npm run build``. Alternatively, if you are under "heavy" development you can run ``npm run dev``; it will rebuild the source files on every change
4. Run the tests: ``npm t``