## Testing non-FBP programs

fbp-spec requires that the *tests* are expressed as FBP programs;
however, the program under test can be anything, as long as it can be accessed
from an FBP runtime.
For instance, one could have a test fixture which exercises
a command-line program, an HTTP service, a library API (possibly using FFI),
or even some hardware device.

NoFlo is general-purpose, and would be suitable for writing such tests.

TODO: write a simple example testing a non-FBP program

## From failure to regression-test

Many issues are found along the way during development, or after a version of the software is in use in production.
To get to high-quality software fast, we'd like to minimize the time spent going from a failure to fixed software,
including a testcase which ensures the problem is, and continues to be, fixed.

Process:

* Spot the problem. Can be manual or automated, e.g. by monitoring services/tools.
* Get the input data that triggered it. Can for instance come from a Flowtrace.
* Create a testcase for it, using fbp-spec in our case.
* Verify that the testcase reproduces the problem.
* Create a minimal testcase from the original.

References:

> The DeltaDebugging algorithm generalizes and simplifies some failing test case
> to a minimal test case that still produces the failure;
> it also isolates the difference between a passing and a failing test case.

[Simplifying and isolating failure-inducing input](https://www.st.cs.uni-saarland.de/papers/tse2002/tse2002.pdf)
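
A greatly simplified, greedy flavor of such test-case minimization can be sketched as follows; the failure predicate here is hypothetical, standing in for "run the testcase and check whether it still fails":

```javascript
// Greedy shrinking: repeatedly drop elements that are not needed to
// reproduce the failure. A much simplified cousin of the ddmin algorithm.
function minimize(input, stillFails) {
  let current = input.slice();
  let shrunk = true;
  while (shrunk) {
    shrunk = false;
    for (let i = 0; i < current.length; i++) {
      // Candidate with element i removed
      const candidate = current.slice(0, i).concat(current.slice(i + 1));
      if (stillFails(candidate)) {
        current = candidate;
        shrunk = true;
        break;
      }
    }
  }
  return current;
}

// Hypothetical failure: the program crashes whenever input contains both 2 and 7
const fails = (xs) => xs.includes(2) && xs.includes(7);
console.log(minimize([1, 2, 3, 4, 5, 6, 7, 8], fails)); // → [ 2, 7 ]
```

Unlike real delta debugging, this only removes one element at a time, so it can miss smaller testcases and is slow on large inputs; the paper above describes the full algorithm.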

## Generative testing

### Fuzzing of input data

Desired features:

* Start with a reference object, mutate it until the test fails.
* Generate objects from scratch.
* Use a JSON schema as the basis for understanding which data is valid/invalid.
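
The first desired feature can be sketched as a small mutation loop; the object shape and the property being checked are made-up examples:

```javascript
// Sketch of mutation-based fuzzing: start from a reference object and
// apply random mutations until the property under test fails.
function mutate(obj, rng) {
  const copy = JSON.parse(JSON.stringify(obj));
  const keys = Object.keys(copy);
  const key = keys[Math.floor(rng() * keys.length)];
  const mutations = [
    (v) => null,                                   // drop the value
    (v) => (typeof v === 'number' ? -v : v + v),   // negate or duplicate
    (v) => (typeof v === 'number' ? v + 1e9 : ''), // extreme or empty
  ];
  copy[key] = mutations[Math.floor(rng() * mutations.length)](copy[key]);
  return copy;
}

// Mutate until `property` no longer holds; returns the failing input, if any
function fuzz(reference, property, iterations) {
  for (let i = 0; i < iterations; i += 1) {
    const candidate = mutate(reference, Math.random);
    if (!property(candidate)) return candidate;
  }
  return null;
}

// Hypothetical property of the program under test: age is a non-negative number
const property = (o) => typeof o.age === 'number' && o.age >= 0;
const failing = fuzz({ name: 'alice', age: 30 }, property, 1000);
console.log(failing ? 'found failing input' : 'no failure found');
```

With a JSON schema available, the mutation set could be derived from it: generate valid objects from the schema, and deliberately violate individual constraints to produce invalid ones.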

Existing:

* [JsonGen](http://stolksdorf.github.io/JsonGen): JavaScript, uses a DSL embedded in strings for variation.
* [popcorn](https://github.com/asmyczek/popcorn): Embedded DSL in JavaScript, pluggable generators. Not changed since 2010.
* [fuzzer](https://www.npmjs.com/package/fuzzer): JavaScript npm package, mutates JS objects. Tailored for HTTP API fuzzing.
* [hotfuzz](https://www.npmjs.com/package/hotfuzz): JavaScript, for testing Jade templates. Mutates a reference object.
* [json-fuzz-generator](https://github.com/deme0607/json-fuzz-generator): Ruby gem, generates valid or invalid data from a JSON schema.
* [Peach JSON tutorial](http://www.rockfishsec.com/2014/01/fuzzing-vulnserver-with-peach-3.html): Peach is generic, with protocols/formats described in XML.

## Invariant-based testing

An invariant is something that should always hold true (or hold for a wide set of inputs).
Unlike a testcase, an invariant is not checked explicitly with an assertion on a particular output.
Instead it should be possible to describe invariants, and attach something that validates
them when running all the regular testcases.
This would be especially important in combination with fuzzing, since one can then
generate and validate large sets of testcases.

Examples:

* The output is always valid according to a JSON schema.
* One part of the object always has the same relation to another: larger than, equal, not-equal.
* The output always obeys some constraint beyond the schema, e.g. a number is always a multiple of 10.

Related:

* [Agree](https://github.com/jonnor/agree), contracts-programming for JavaScript, allows expressing invariants.

## Related

* https://github.com/microflo/microflo/blob/master/doc/braindump.md#correctness-testing