# Abstract NoSQL Database [![Build Status](https://secure.travis-ci.org/snowyu/node-abstract-nosql.png?branch=master)](http://travis-ci.org/snowyu/node-abstract-nosql)

[![NPM](https://nodei.co/npm/abstract-nosql.png?downloads=true&downloadRank=true)](https://nodei.co/npm/abstract-nosql/)
[![NPM](https://nodei.co/npm-dl/abstract-nosql.png?months=6&height=3)](https://nodei.co/npm/abstract-nosql/)


The abstract-nosql package is derived from abstract-leveldown. It enhances support for synchronous methods so that a Node.js NoSQL database can be developed quickly and used easily, and it adds streaming ability.

The abstract-nosql interface is neutral: it has neither a synchronous nor an asynchronous bias, so people can choose whichever manner suits them. For myself, I am not very concerned about the performance of JavaScript; I care more about development efficiency, and about the rich and wonderful world that can be built from functional programming (functions and closures, such simple concepts). Still, I cannot help but think about performance: asynchrony itself introduces a small overhead, and the nature of JavaScript magnifies that gap.

On the asynchronous versus synchronous question: if a function has only a 1% chance of touching I/O and spends the other 99% of its time in memory, different situations deserve different choices, and that decision is best left to the caller rather than fixed by the interface.

Converting a synchronous operation into an asynchronous one is easy and costs almost nothing; the reverse may not be. There are many ways to do the conversion. `setImmediate` is not the best, but it is the simplest. An ES6 generator or [node-fibers](https://github.com/laverdet/node-fibers) could be a better way; a coroutine/fiber is lighter and more efficient than a thread.

The setImmediate package could be extended to use a different implementation (setImmediate, nextTick, ES6 generator, node-fiber) in each environment. The simulated asynchronous methods work this way when you do not implement the asynchronous methods yourself.
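
For illustration, here is a minimal sketch of how a synchronous method could be exposed asynchronously via `setImmediate`; the `makeAsync` helper and the `getSync`/`get` names are hypothetical, not part of the package API:

```js
// Hypothetical helper: wrap a synchronous method as an asynchronous one.
// The error (if any) and the result are delivered to the callback on a later tick.
function makeAsync(db, syncMethodName) {
  return function (key, options, callback) {
    setImmediate(function () {
      try {
        callback(null, db[syncMethodName](key, options))
      } catch (err) {
        callback(err)
      }
    })
  }
}

// usage sketch: db.get = makeAsync(db, 'getSync')
```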

## About LevelDOWN

An abstract prototype matching the **[LevelDOWN](https://github.com/rvagg/node-leveldown/)** API. Useful for extending **[LevelUP](https://github.com/rvagg/node-levelup)** functionality by providing a replacement to LevelDOWN.

As of version 0.7, LevelUP allows you to pass a `'db'` option when you create a new instance. This will override the default LevelDOWN store with a LevelDOWN API compatible object.

**Abstract LevelDOWN** provides a simple, operational *noop* base prototype that's ready for extending. By default, all operations have sensible "noops" (operations that essentially do nothing). For example, simple operations such as `.open(callback)` and `.close(callback)` will simply invoke the callback (on a *next tick*). More complex operations perform sensible actions, for example: `.get(key, callback)` will always return a `'NotFound'` `Error` on the callback.

You add functionality by implementing the underscore versions of the operations. For example, to implement a `put()` operation you add a `_put()` method to your object. Each of these underscore methods override the default *noop* operations and are always provided with **consistent arguments**, regardless of what is passed in by the client.

Additionally, all methods provide argument checking and sensible defaults for optional arguments. All bad-argument errors are compatible with LevelDOWN (they pass the LevelDOWN method arguments tests). For example, if you call `.open()` without a callback argument you'll get an `Error('open() requires a callback argument')`. Where optional arguments are involved, your underscore methods will receive sensible defaults. A `.get(key, callback)` will pass through to a `._get(key, options, callback)` where the `options` argument is an empty object.


## Changes (differences from abstract-leveldown)

+ Add the optional `getBuffer`/`getBufferSync(key, destBuffer, options)` methods.
  * the key's value will be put into destBuffer if destBuffer is not null.
  * the `options.offset` option is added: write to destBuffer at the offset position (offset defaults to 0).
  * the value will be truncated if destBuffer.length is less than the value's length.
  * return the byte size of the value.
  * it will be simulated via `get`/`getSync` if no `_getBuffer` is implemented.
- Move the AbstractIterator out to the [abstract-iterator](https://github.com/snowyu/node-abstract-iterator) package.
+ Add the stream ability.
  * You should install the [nosql-stream](https://github.com/snowyu/nosql-stream) package first to use this feature.
+ Add the AbstractError classes and error code supports.
* The DB constructor allows no location argument.
* Add IteratorClass supports.
+ Add synchronous methods supports.
  * You can implement the synchronous methods only; the asynchronous methods will be simulated via these synchronous methods.
  * If you want to support the asynchronous methods only, just do not implement the synchronous methods.
  * But if you want to support the synchronous methods only, you should override the asynchronous methods to disable them.
+ Add the optional `isExists`/`isExistsSync` methods to test whether a key exists.
  * they will be simulated via the `_get`/`_getSync` methods if no `_isExists` or `_isExistsSync` is implemented.
+ The AbstractNoSQL class supports events now.
  * emit the `'open'` and `'ready'` events after the database is opened.
  * emit the `'closed'` event after the database is closed.
+ Add `isOpen()`/`opened` to test whether the database is opened.
+ Add the `mGetSync()`/`mGet()` multi-key get methods for the range (Array) option of the Iterator (see the usage sketch after this list).
  * they will be simulated via the `_get`/`_getSync` methods if no `_mGet` or `_mGetSync` is implemented.
  * Note: `mGet`/`mGetSync` return an array of objects: `[{key: key, value: value}, ...]`
  * But `_mGet`/`_mGetSync` return a plain array: `[key1, value1, key2, value2, ...]`
  + `keys` *(bool, default true)* option: whether to return keys or not.
    * return the values array if keys is false.
  + `raiseError` *(bool, default true)* option: whether to raise or ignore errors.
    * some elements will be undefined for erroring values when raiseError is false.
+ Add `Iterator.nextSync`.
  * Note: `nextSync` returns an object `{key: key, value: value}`, and returns false when iteration has ended.
  * But `_nextSync` returns an array: `[key, value]`
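
A brief usage sketch of these additions; it assumes an opened database instance `db` from a concrete implementation (for example memdown-sync), so the surrounding setup is not shown:

```js
db.on('ready', function () {
  // isExists: test whether a key exists
  db.isExists('name', function (err, exists) {
    if (err) throw err
    console.log('name exists?', exists)
  })

  // mGet returns an array of objects: [{key: ..., value: ...}, ...]
  db.mGet(['name', 'dob'], function (err, items) {
    if (err) throw err
    items.forEach(function (item) {
      console.log(item.key, '=', item.value)
    })
  })

  // synchronous counterparts
  console.log(db.isExistsSync('name'))
  console.log(db.mGetSync(['name', 'dob']))
})
```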

## AbstractError Classes

see [abstract-object](https://github.com/snowyu/abstract-object)

### AbstractError

All errors are derived from AbstractError.

* Members:
  * message: the error message.
  * code: the error code.
* Methods:
  * ok()
  * notFound()
  * ....
  * invalidFormat()
* Class Methods:
  * AbstractError.isOk(err)
  * AbstractError.isNotFound(err)
  * ...

The error codes:

* AbstractError.Ok = 0
* AbstractError.NotFound = 1
* AbstractError.Corruption = 2
* AbstractError.NotSupported = 3
* AbstractError.InvalidArgument = 4
* AbstractError.IO = 5
* AbstractError.NotOpened = 6
* AbstractError.InvalidType = 7
* AbstractError.InvalidFormat = 8
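
A hedged sketch of how these codes might be checked in practice, using the instance and class methods listed above (the surrounding callback context is illustrative only):

```js
db.get('no-such-key', function (err, value) {
  if (err) {
    // instance method on the error object
    if (err.notFound()) return console.log('key is missing')
    // or the equivalent class method form
    if (AbstractError.isNotFound(err)) return console.log('key is missing')
    throw err
  }
  console.log('got', value)
})
```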


### Other Error Classes:

* NotFoundError
* CorruptionError
* NotSupportedError/NotImplementedError
* InvalidArgumentError
* IOError
* NotOpenedError
* InvalidTypeError
* InvalidFormatError
* OpenError
* CloseError
* AlreadyEndError


```js
var OpenError = createError("CanNotOpen", NotOpened)
var CloseError = createError("CanNotClose", 52)
var AlreadyEndError = createError("AlreadyEnd", 53)
```


## Streamable

Once the [AbstractIterator](https://github.com/snowyu/node-abstract-iterator) implements:

* `AbstractIterator._nextSync()` or `AbstractIterator._next()`.
* `AbstractIterator._endSync()` or `AbstractIterator._end()`.

the db will be streamable.

But you should install the [nosql-stream](https://github.com/snowyu/nosql-stream) package first:

    npm install nosql-stream

see [nosql-stream](https://github.com/snowyu/nosql-stream) for more details.

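Once that is in place, the stream helpers documented below become available. A minimal usage sketch (assuming an opened database instance `db`):

```js
db.keyStream().on('data', function (key) {
  console.log('key:', key)
})

db.valueStream().on('data', function (value) {
  console.log('value:', value)
})

db.readStream()
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('end', function () {
    console.log('no more data')
  })
```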

### AbstractNoSql.keyStream(createKeyStream)

create a readable stream.

the data item is a key.

### AbstractNoSql.valueStream(createValueStream)

create a readable stream.

the data item is a value.

### AbstractNoSql.readStream(createReadStream)

create a readable stream.

the data item is an object: {key: key, value: value}.

* AbstractNoSql.readStream([options])
* AbstractNoSql.createReadStream

__arguments__

* options: the optional options object (note: some options depend on the implementation of the Iterator)
  * `'next'`: the raw key data; ensures the readStream returns only keys greater than this key. See the `'last'` event.
    * note: this will affect the range [gt/gte or lt/lte (reverse)] options.
  * `'filter'` *(function)*: filter the data in the stream.
    * function filter(key, value) should return:
      * 0 (consts.FILTER_INCLUDED): include this item (default)
      * 1 (consts.FILTER_EXCLUDED): exclude this item.
      * -1 (consts.FILTER_STOPPED): stop the stream.
    * note: the filter function's 'key' and 'value' arguments may be null; this depends on the keys and values options.
  * `'range'` *(string or array)*: only keys in the given range, in the following format:
    * string:
      * "[a, b]": from a to b, a and b included. this means {gte: 'a', lte: 'b'}
      * "(a, b]": from a to b, b included, a excluded. this means {gt: 'a', lte: 'b'}
      * "[, b)": from the beginning to b, beginning included, b excluded. this means {lt: 'b'}
      * note: this will affect the gt/gte/lt/lte options.
    * array: the list of keys to get. eg, ['a', 'b', 'c']
  * `'gt'` (greater than), `'gte'` (greater than or equal) define the lower bound of the range to be streamed. Only records where the key is greater than (or equal to) this option will be included in the range. When `reverse=true` the order will be reversed, but the records streamed will be the same.
  * `'lt'` (less than), `'lte'` (less than or equal) define the higher bound of the range to be streamed. Only key/value pairs where the key is less than (or equal to) this option will be included in the range. When `reverse=true` the order will be reversed, but the records streamed will be the same.
  * `'start', 'end'` legacy ranges - instead use `'gte', 'lte'`
  * `'match'` *(string)*: use minimatch to match the specified keys.
    * note: this may affect the range [gt/gte or lt/lte (reverse)] options.
  * `'limit'` *(number, default: `-1`)*: limit the number of results collected by this stream. This number represents a *maximum* number of results and may not be reached if you get to the end of the data first. A value of `-1` means there is no limit. When `reverse=true` the highest keys will be returned instead of the lowest keys.
  * `'reverse'` *(boolean, default: `false`)*: set to true and the stream output will be reversed.
  * `'keys'` *(boolean, default: `true`)*: whether the `'data'` event should contain keys. If set to `true` and `'values'` set to `false` then `'data'` events will simply be keys, rather than objects with a `'key'` property. Used internally by the `createKeyStream()` method.
  * `'values'` *(boolean, default: `true`)*: whether the `'data'` event should contain values. If set to `true` and `'keys'` set to `false` then `'data'` events will simply be values, rather than objects with a `'value'` property. Used internally by the `createValueStream()` method.

__return__

* object: the read stream object

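For example, the `'range'` option accepts either the string interval form or a plain key list described above (a sketch, assuming string keys):

```js
// keys from 'a' (included) to 'b' (excluded); equivalent to {gte: 'a', lt: 'b'}
db.readStream({range: '[a, b)'})
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })

// only the listed keys
db.readStream({range: ['a', 'b', 'c']})
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
```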

#### Events

The standard `'data'`, `'error'`, `'end'` and `'close'` events are emitted.
The `'last'` event will be emitted when the last data item arrives; its argument is the last raw key.
If there is no more data, the last key is `undefined`.

```js
var MemDB = require("memdown-sync")


var db1 = MemDB("db1")
var db2 = MemDB("db2")

var ws = db1.writeStream()
var ws2 = db2.createWriteStream()

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('finish', function () {
  console.log('Write Stream finish')
  // read all data through the ReadStream
  db1.readStream().on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })
  .on('close', function () {
    console.log('Stream closed')
  })
  .on('end', function () {
    console.log('Stream ended')
  })
  .pipe(ws2) // copy database db1 to db2
})

ws.write({ key: 'name', value: 'Yuri Irsenovich Kim' })
ws.write({ key: 'dob', value: '16 February 1941' })
ws.write({ key: 'spouse', value: 'Kim Young-sook' })
ws.write({ key: 'occupation', value: 'Clown' })
ws.end()
```

filter usage:

```js
db.createReadStream({filter: function (key, value) {
  if (/^hit/.test(key))
    return db.FILTER_INCLUDED
  else if (key == 'endStream')
    return db.FILTER_STOPPED
  else
    return db.FILTER_EXCLUDED
}})
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })
  .on('close', function () {
    console.log('Stream closed')
  })
  .on('end', function () {
    console.log('Stream ended')
  })
```

`next` and `'last'` usage for a paged-data demo:

```js
var callbackStream = require('callback-stream')

var lastKey = null;

function nextPage(db, aLastKey, aPageSize, cb) {
  var stream = db.readStream({next: aLastKey, limit: aPageSize})
  stream.on('last', function (aLastKey) {
    lastKey = aLastKey;
  });

  stream.pipe(callbackStream(function (err, data) {
    cb(data, lastKey)
  }))
}

var pageNo = 1;
var dataCallback = function (data, lastKey) {
  console.log("page:", pageNo);
  console.log(data);
  ++pageNo;
  if (lastKey) {
    nextPage(db, lastKey, 10, dataCallback);
  } else {
    console.log("no more data");
  }
}
nextPage(db, lastKey, 10, dataCallback);
```

## Extensible API

Remember that each of these methods, if you implement them, will receive exactly the number and order of arguments described. Optional arguments will be converted to sensible defaults.

### AbstractNoSql(location)

## Sync Methods

### AbstractNoSql#_isExistsSync(key, options)

this is an optional method for performance.

### AbstractNoSql#_mGetSync(keys, options)

this is an optional method for performance.

__arguments__

* keys *(array)*: the keys array to get.
* options *(object)*: the options for get.

__return__

* array: [key1, value1, key2, value2, ...]

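A minimal sketch of what a `_mGetSync` implementation might look like over a plain object store; the `_store` layout mirrors the example at the end of this document and is only an assumption:

```js
// Hypothetical implementation; a real store would read from its own backend.
FakeNoSqlDatabase.prototype._mGetSync = function (keys, options) {
  var result = []
  for (var i = 0; i < keys.length; i++) {
    // the internal form is a flat array: [key1, value1, key2, value2, ...]
    result.push(keys[i], this._store['_' + keys[i]])
  }
  return result
}
```
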
### AbstractNoSql#_openSync(options)
### AbstractNoSql#_getSync(key, options)
### AbstractNoSql#_putSync(key, value, options)
### AbstractNoSql#_delSync(key, options)
### AbstractNoSql#_batchSync(array, options)


## Async Methods

### AbstractNoSql#_isExists(key, options, callback)

this is an optional method for performance.

### AbstractNoSql#_mGet(keys, options, callback)

this is an optional method for performance.

__arguments__

* keys *(array)*: the keys array to get.
* options *(object)*: the options for get.
* callback *(function)*: the callback function
  * function(err, items)
  * items: [key1, value1, key2, value2, ...]

### AbstractNoSql#_open(options, callback)
### AbstractNoSql#_close(callback)
### AbstractNoSql#_get(key, options, callback)
### AbstractNoSql#_put(key, value, options, callback)
### AbstractNoSql#_del(key, options, callback)
### AbstractNoSql#_batch(array, options, callback)

If `batch()` is called without arguments, or with only an options object, then it should return a `Batch` object with chainable methods. Otherwise it will invoke a classic batch operation.

(`batch` would more accurately be named `transact`.)

<code>batch()</code> can be used for very fast bulk-write operations (both *put* and *delete*). The `array` argument should contain a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation inside LevelDB. Each operation is contained in an object having the following properties: `type`, `key`, `value`, where the *type* is either `'put'` or `'del'`. In the case of `'del'` the `'value'` property is ignored. Any entries with a `'key'` of `null` or `undefined` will cause an error to be returned on the `callback` and any `'type': 'put'` entry with a `'value'` of `null` or `undefined` will return an error.

```js
var ops = [
    { type: 'del', key: 'father' }
  , { type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' }
  , { type: 'put', key: 'dob', value: '16 February 1941' }
  , { type: 'put', key: 'spouse', value: 'Kim Young-sook' }
  , { type: 'put', key: 'occupation', value: 'Clown' }
]

db.batch(ops, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('Great success dear leader!')
})
```

### AbstractNoSql#_chainedBatch()

By default a `batch()` operation without arguments returns a blank `AbstractChainedBatch` object. The prototype is available on the main exports for you to extend. If you want to implement chainable batch operations then you should extend `AbstractChainedBatch` and return your object in the `_chainedBatch()` method.

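A usage sketch of the chainable form; the public `put`/`del`/`write` method names follow the underscore convention described earlier and are an assumption here:

```js
db.batch()
  .put('name', 'Yuri Irsenovich Kim')
  .put('dob', '16 February 1941')
  .del('father')
  .write(function (err) {
    if (err) return console.log('Ooops!', err)
    console.log('batch written')
  })
```
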
### AbstractNoSql#_approximateSize(start, end, callback)

### AbstractNoSql#IteratorClass

You can override the `IteratorClass` with your own Iterator class.
After overriding it, implementing the `_iterator()` method is not necessary.

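A hypothetical sketch of such an override; the exact export and constructor signature of the abstract-iterator package are assumptions here:

```js
var util = require('util')
var AbstractIterator = require('abstract-iterator')

function MyIterator (db, options) {
  AbstractIterator.apply(this, arguments)
}
util.inherits(MyIterator, AbstractIterator)

MyIterator.prototype._nextSync = function () {
  // return [key, value], or false when iteration has ended
  return false
}

// assign the class; implementing _iterator() is then unnecessary
FakeNoSqlDatabase.prototype.IteratorClass = MyIterator
```
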
### AbstractNoSql#_iterator(options)

By default an `iterator()` operation returns a blank `AbstractIterator` object. The prototype is available on the main exports for you to extend. If you want to implement iterator operations then you should extend the `AbstractIterator` and return your object in the `_iterator(options)` method.

`AbstractIterator` implements the basic state management found in LevelDOWN. It keeps track of when a `next()` is in progress and when an `end()` has been called so it doesn't allow concurrent `next()` calls, it does allow `end()` while a `next()` is in progress and it doesn't allow either `next()` or `end()` after `end()` has been called.

__arguments__

* options *(object)*: optional object with the following options:
  * `'gt'` (greater than), `'gte'` (greater than or equal) define the lower bound of the range to be streamed. Only records where the key is greater than (or equal to) this option will be included in the range. When `reverse=true` the order will be reversed, but the records streamed will be the same.
  * `'lt'` (less than), `'lte'` (less than or equal) define the higher bound of the range to be streamed. Only key/value pairs where the key is less than (or equal to) this option will be included in the range. When `reverse=true` the order will be reversed, but the records streamed will be the same.
  * `'reverse'` *(boolean, default: `false`)*: set to true and the stream output will be reversed. Beware that due to the way LevelDB works, a reverse seek will be slower than a forward seek.
  * `'keys'` *(boolean, default: `true`)*: whether the results contain keys.
  * `'values'` *(boolean, default: `true`)*: whether the results contain values.
  * `'limit'` *(number, default: `-1`)*: limit the number of results collected by this stream. This number represents a *maximum* number of results and may not be reached if you get to the end of the data first. A value of `-1` means there is no limit. When `reverse=true` the highest keys will be returned instead of the lowest keys.
  * `'fillCache'` *(boolean, default: `false`)*: whether LevelDB's LRU-cache should be filled with data read.


### AbstractChainedBatch

Provided with the current instance of `AbstractNoSql` by default.

### AbstractChainedBatch#_put(key, value)
### AbstractChainedBatch#_del(key)
### AbstractChainedBatch#_clear()
### AbstractChainedBatch#_write(options, callback)

## Example

A simplistic in-memory LevelDOWN replacement.

use sync methods:

```js
var util = require('util')
  , AbstractNoSql = require('./').AbstractNoSql

// constructor, passes through the 'location' argument to the AbstractNoSql constructor
function FakeNoSqlDatabase (location) {
  AbstractNoSql.call(this, location)
}

// our new prototype inherits from AbstractNoSql
util.inherits(FakeNoSqlDatabase, AbstractNoSql)

// implement some methods

FakeNoSqlDatabase.prototype._openSync = function (options) {
  this._store = {}
  return true
}

FakeNoSqlDatabase.prototype._putSync = function (key, value, options) {
  key = '_' + key // safety, to avoid key='__proto__'-type skullduggery
  this._store[key] = value
  return true
}

// isExists is an optional method:
FakeNoSqlDatabase.prototype._isExistsSync = function (key, options) {
  return this._store.hasOwnProperty('_' + key)
}

FakeNoSqlDatabase.prototype._getSync = function (key, options) {
  var value = this._store['_' + key]
  if (value === undefined) {
    // 'NotFound' error, consistent with LevelDOWN API
    throw new Error('NotFound')
  }
  return value
}

FakeNoSqlDatabase.prototype._delSync = function (key, options) {
  delete this._store['_' + key]
  return true
}

// use it directly

var db = new FakeNoSqlDatabase()

// sync:
db.put('foo', 'bar')
var result = db.get('foo')

// async:
db.put('foo', 'bar', function (err) {
  if (err) throw err
  db.get('foo', function (err, value) {
    if (err) throw err
    console.log('Got foo =', value)
    db.isExists('foo', function (err, isExists) {
      if (err) throw err
      console.log('isExists foo =', isExists)
    })
  })
})

// stream:

db.readStream().on('data', function (data) {
})

// Or use it in LevelUP

var levelup = require('levelup')

var db = levelup('/who/cares/', {
  // the 'db' option replaces LevelDOWN
  db: function (location) { return new FakeNoSqlDatabase(location) }
})

// async:
db.put('foo', 'bar', function (err) {
  if (err) throw err
  db.get('foo', function (err, value) {
    if (err) throw err
    console.log('Got foo =', value)
    db.isExists('foo', function (err, isExists) {
      if (err) throw err
      console.log('isExists foo =', isExists)
    })
  })
})

// sync:
db.put('foo', 'bar')
console.log(db.get('foo'))
console.log(db.isExists('foo'))
```

use async methods (no sync supports):

```js
var util = require('util')
  , AbstractNoSql = require('./').AbstractNoSql

// constructor, passes through the 'location' argument to the AbstractNoSql constructor
function FakeNoSqlDatabase (location) {
  AbstractNoSql.call(this, location)
}

// our new prototype inherits from AbstractNoSql
util.inherits(FakeNoSqlDatabase, AbstractNoSql)

// implement some methods

FakeNoSqlDatabase.prototype._open = function (options, callback) {
  // initialise a memory storage object
  this._store = {}
  // optional use of nextTick to be a nice async citizen
  process.nextTick(function () { callback(null, this) }.bind(this))
}

FakeNoSqlDatabase.prototype._put = function (key, value, options, callback) {
  key = '_' + key // safety, to avoid key='__proto__'-type skullduggery
  this._store[key] = value
  process.nextTick(callback)
}

// isExists is an optional method:
FakeNoSqlDatabase.prototype._isExists = function (key, options, callback) {
  var value = this._store.hasOwnProperty('_' + key)
  process.nextTick(function () {
    callback(null, value)
  })
}

FakeNoSqlDatabase.prototype._get = function (key, options, callback) {
  var value = this._store['_' + key]
  if (value === undefined) {
    // 'NotFound' error, consistent with LevelDOWN API
    return process.nextTick(function () { callback(new Error('NotFound')) })
  }
  process.nextTick(function () {
    callback(null, value)
  })
}

FakeNoSqlDatabase.prototype._del = function (key, options, callback) {
  delete this._store['_' + key]
  process.nextTick(callback)
}

// now use it in LevelUP

var levelup = require('levelup')

var db = levelup('/who/cares/', {
  // the 'db' option replaces LevelDOWN
  db: function (location) { return new FakeNoSqlDatabase(location) }
})

db.put('foo', 'bar', function (err) {
  if (err) throw err
  db.get('foo', function (err, value) {
    if (err) throw err
    console.log('Got foo =', value)
  })
})
```

See [MemDOWN-sync](https://github.com/snowyu/node-memdown-sync/) if you are looking for a complete in-memory replacement for an AbstractNoSql database.


<a name="contributing"></a>
Contributing
------------

Abstract LevelDOWN is an **OPEN Open Source Project**. This means that:

> Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the [CONTRIBUTING.md](https://github.com/rvagg/node-levelup/blob/master/CONTRIBUTING.md) file for more details.

### Contributors

Abstract LevelDOWN is only possible due to the excellent work of the following contributors:

<table><tbody>
<tr><th align="left">Riceball LEE</th><td><a href="https://github.com/snowyu">GitHub/snowyu</a></td><td>&nbsp;</td></tr>
<tr><th align="left">Rod Vagg</th><td><a href="https://github.com/rvagg">GitHub/rvagg</a></td><td><a href="http://twitter.com/rvagg">Twitter/@rvagg</a></td></tr>
<tr><th align="left">John Chesley</th><td><a href="https://github.com/chesles/">GitHub/chesles</a></td><td><a href="http://twitter.com/chesles">Twitter/@chesles</a></td></tr>
<tr><th align="left">Jake Verbaten</th><td><a href="https://github.com/raynos">GitHub/raynos</a></td><td><a href="http://twitter.com/raynos2">Twitter/@raynos2</a></td></tr>
<tr><th align="left">Dominic Tarr</th><td><a href="https://github.com/dominictarr">GitHub/dominictarr</a></td><td><a href="http://twitter.com/dominictarr">Twitter/@dominictarr</a></td></tr>
<tr><th align="left">Max Ogden</th><td><a href="https://github.com/maxogden">GitHub/maxogden</a></td><td><a href="http://twitter.com/maxogden">Twitter/@maxogden</a></td></tr>
<tr><th align="left">Lars-Magnus Skog</th><td><a href="https://github.com/ralphtheninja">GitHub/ralphtheninja</a></td><td><a href="http://twitter.com/ralphtheninja">Twitter/@ralphtheninja</a></td></tr>
<tr><th align="left">David Björklund</th><td><a href="https://github.com/kesla">GitHub/kesla</a></td><td><a href="http://twitter.com/david_bjorklund">Twitter/@david_bjorklund</a></td></tr>
<tr><th align="left">Julian Gruber</th><td><a href="https://github.com/juliangruber">GitHub/juliangruber</a></td><td><a href="http://twitter.com/juliangruber">Twitter/@juliangruber</a></td></tr>
<tr><th align="left">Paolo Fragomeni</th><td><a href="https://github.com/hij1nx">GitHub/hij1nx</a></td><td><a href="http://twitter.com/hij1nx">Twitter/@hij1nx</a></td></tr>
<tr><th align="left">Anton Whalley</th><td><a href="https://github.com/No9">GitHub/No9</a></td><td><a href="https://twitter.com/antonwhalley">Twitter/@antonwhalley</a></td></tr>
<tr><th align="left">Matteo Collina</th><td><a href="https://github.com/mcollina">GitHub/mcollina</a></td><td><a href="https://twitter.com/matteocollina">Twitter/@matteocollina</a></td></tr>
<tr><th align="left">Pedro Teixeira</th><td><a href="https://github.com/pgte">GitHub/pgte</a></td><td><a href="https://twitter.com/pgte">Twitter/@pgte</a></td></tr>
<tr><th align="left">James Halliday</th><td><a href="https://github.com/substack">GitHub/substack</a></td><td><a href="https://twitter.com/substack">Twitter/@substack</a></td></tr>
<tr><th align="left">Thomas Watson Steen</th><td><a href="https://github.com/watson">GitHub/watson</a></td><td><a href="https://twitter.com/wa7son">Twitter/@wa7son</a></td></tr>
</tbody></table>

<a name="license"></a>
License &amp; copyright
-------------------

Copyright (c) 2012-2014 Abstract LevelDOWN contributors (listed above).

Abstract LevelDOWN is licensed under the MIT license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE.md file for more details.