# logtron

[![build status](https://secure.travis-ci.org/uber/logtron.png)](http://travis-ci.org/uber/logtron)

logger used in realtime

## Example

```js
var Logger = require('logtron');

var statsd = StatsdClient(...)

/* configure your logger

 - pass in meta data to describe your service
 - pass in your backends of choice
*/
var logger = Logger({
    meta: {
        team: 'my-team',
        project: 'my-project'
    },
    backends: Logger.defaultBackends({
        logFolder: '/var/log/nodejs',
        console: true,
        kafka: { proxyHost: 'localhost', proxyPort: 9093 },
        sentry: { id: '{sentryId}' }
    }, {
        // pass in a statsd client to turn on an airlock prober
        // on the kafka and sentry connection
        statsd: statsd
    })
});

/* now write your app and use your logger */
var http = require('http');

var server = http.createServer(function (req, res) {
    logger.info('got a request', {
        uri: req.url
    });

    res.end('hello world');
});

server.listen(8000, function () {
    var addr = server.address();
    logger.info('server bound', {
        port: addr.port,
        address: addr.address
    });
});

/* maybe some error handling */
server.on("error", function (err) {
    logger.error("unknown server error", err);
});
```

## Docs

### Type definitions

See [docs.mli](docs.mli) for type definitions

### `var logger = Logger(options)`

```ocaml
type Backend := {
    createStream: (meta: Object) => WritableStream
}

type Entry := {
    level: String,
    message: String,
    meta: Object,
    path: String
}

type Logger := {
    trace: (message: String, meta: Object, cb?: Callback) => void,
    debug: (message: String, meta: Object, cb?: Callback) => void,
    info: (message: String, meta: Object, cb?: Callback) => void,
    access?: (message: String, meta: Object, cb?: Callback) => void,
    warn: (message: String, meta: Object, cb?: Callback) => void,
    error: (message: String, meta: Object, cb?: Callback) => void,
    fatal: (message: String, meta: Object, cb?: Callback) => void,
    writeEntry: (Entry, cb?: Callback) => void,
    createChild: (path: String, levels: Object<levelName: String>, opts?: Object) => Logger
}

type LogtronLogger := EventEmitter & Logger & {
    instrument: (server?: HttpServer, opts?: Object) => void,
    destroy: ({
        createStream: (meta: Object) => WritableStream
    }) => void
}

logtron/logger := ((LoggerOpts) => LogtronLogger) & {
    defaultBackends: (config: {
        logFolder?: String,
        kafka?: {
            proxyHost: String,
            proxyPort: Number
        },
        console?: Boolean,
        sentry?: {
            id: String
        }
    }, clients?: {
        statsd: StatsdClient,
        kafkaClient?: KafkaClient
    }) => {
        disk: Backend | null,
        kafka: Backend | null,
        console: Backend | null,
        sentry: Backend | null
    }
}
```

`Logger` takes a set of meta information for the logger, which each backend uses to customize its log formatting, and a set of backends that you want to be able to write to.

`Logger` returns a logger object that has some method names in common with `console`.

#### `options.meta.name`

`options.meta.name` is the name of your application and should be a string. Various backends may use this value to configure themselves.

For example, the `Disk` backend uses the `name` to create a filename for you.

#### `options.meta.team`

`options.meta.team` is the name of the team that this application belongs to. Various backends may use this value to configure themselves.

For example, the `Disk` backend uses the `team` to create a filename for you.

#### `options.meta.hostname`

`options.meta.hostname` is the hostname of the server this application is running on. You can use `require('os').hostname()` to get the hostname of your process. Various backends may use this value to configure themselves.

For example, the `Sentry` backend sends the `hostname` as meta data to sentry so you can identify which host caused the sentry error in its visual error inspector.

#### `options.meta.pid`

`options.meta.pid` is the `pid` of your process. You can get the `pid` of your process by reading `process.pid`. Various backends may use this value to configure themselves.

For example, the `Disk` backend or `Console` backend may prepend the process pid to each log message or otherwise embed it in the log message. This allows you to tail a log and identify which process is misbehaving.

#### `options.backends`

`options.backends` is how you specify the backends you want to set for your logger. `backends` should be an object of key-value pairs, where the key is the name of the backend and the value is something matching the `Backend` interface.

Out of the box, the `logger` supports four different backend names: `"disk"`, `"console"`, `"kafka"` and `"sentry"`.

If you want to disable a backend, for example `"console"`, just do not pass a console backend to the logger.

A valid `Backend` is an object with a `createStream` method. `createStream` gets passed `options.meta` and must return a `WritableStream`.

There is a set of functions in `logtron/backends` that you can require to make specifying backends easier:

 - `require('logtron/backends/disk')`
 - `require('logtron/backends/console')`
 - `require('logtron/backends/kafka')`
 - `require('logtron/backends/sentry')`

#### `options.transforms`

`options.transforms` is an optional array of transform functions. Each transform function gets called with `[levelName, message, metaObject]` and must return a tuple of `[levelName, message, metaObject]`.

A `transform` is a good place to put transformation logic before it gets logged to a backend.

Each function in the transforms array gets called in order.

A good use-case for the transforms array is pretty printing certain objects like `HttpRequest` or `HttpResponse`. Another good use-case is scrubbing sensitive data.
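
A scrubbing transform might look like the following sketch. The `password` key and the `[REDACTED]` placeholder are illustrative choices, not part of logtron:

```js
// A minimal transform sketch: redact a hypothetical `password` key
// from the meta object before it reaches any backend.
function scrubSensitive(levelName, message, metaObject) {
    var meta = metaObject || {};
    if (meta.password !== undefined) {
        // copy instead of mutating the caller's object
        meta = Object.assign({}, meta, { password: '[REDACTED]' });
    }
    // transforms must return the same [levelName, message, meta] tuple shape
    return [levelName, message, meta];
}

var scrubbed = scrubSensitive('info', 'user login', {
    userId: 42,
    password: 'hunter2'
});
// scrubbed[2].password is now '[REDACTED]'
```

You would then pass it to the logger as `transforms: [scrubSensitive]`.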

#### `logger`

`Logger(options)` returns a `logger` object. The `logger` has a set of logging methods named after the levels for the logger and a `destroy()` method.

Each level method (`info()`, `warn()`, `error()`, etc.) takes a string and an object of more information. You can also pass in an optional callback as the third parameter.

The `string` message argument to the level method should be a static string, not a dynamic string. This allows anyone analyzing the logs to quickly find the callsite in the code, and anyone looking at the callsite in the code to quickly grep through the logs to find all prints.

The `object` information argument should be the dynamic information that you want to log at the callsite. Things like an id, a uri, extra information, etc. are great things to add here. You should favor placing dynamic information in the information object, not in the message string.
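
To make the contrast concrete, here is a sketch. The stub logger just echoes its arguments; in real code it is your logtron logger:

```js
// Stub logger for illustration only; a real logtron logger
// routes these arguments to its backends.
var logger = {
    info: function (message, information) {
        return [message, information];
    }
};

// Avoid: dynamic message string; hard to grep for this callsite.
logger.info('got request for /health at ' + Date.now());

// Prefer: static message string, dynamic values in the info object.
var logged = logger.info('got request', {
    uri: '/health',
    time: Date.now()
});
```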

Each level method will write to a different set of backends.

See [bunyan level descriptions][bunyan] for more / alternative suggestions around how to use levels.

#### `logger.trace(message, information, callback?)`

`trace()` will write your log message to the `["console"]` backends.

Note that due to the high volume nature of `trace()` it should not be spamming `"disk"`.

`trace()` is meant to be used to write tracing information to your logger. This is mainly used for high volume performance debugging.

It's expected that you change the `trace` level configuration to write nowhere in production, and manually toggle it on to write to local disk / stdout if you really want to trace a production process.

#### `logger.debug(message, information, callback?)`

`debug()` will write your log message to the `["disk", "console"]` backends.

Note that due to the higher volume nature of `debug()` it should not be spamming `"kafka"`.

`debug()` is meant to be used to write debugging information. Debugging information is information that is purely about the code and not about the business logic. You might want to print a debug if there is a programmer bug instead of an application / business logic bug.

If you're going to add a high volume `debug()` callsite that will get called a lot or get called in a loop, consider using `trace()` instead.

It's expected that the `debug` level is enabled in production by default.

#### `logger.info(message, information, callback?)`

`info()` will write your log message to the `["disk", "kafka", "console"]` backends.

`info()` is meant to be used when you want to print informational messages that concern application or business logic. These messages should just record that a "useful thing" has happened.

You should use `warn()` or `error()` if you want to print that a "strange thing" or "wrong thing" has happened.

If you're going to print information that does not concern business or application logic, consider using `debug()` instead.

#### `logger.warn(message, information, callback?)`

`warn()` will write your log message to the `["disk", "kafka", "console"]` backends.

`warn()` is meant to be used when you want to print warning messages that concern application or business logic. These messages should just record that an "unusual thing" has happened.

If you're in a code path where you cannot recover or continue cleanly, you should consider using `error()` instead. `warn()` is generally used for code paths that are correct but not normal.

#### `logger.error(message, information, callback?)`

`error()` will write your log message to the `["disk", "kafka", "console", "sentry"]` backends.

Note that due to the importance of error messages they go to `"sentry"`, so we can track all errors for an application using sentry.

`error()` is meant to be used when you want to print error messages that concern application or business logic. These messages should just record that a "wrong thing" has happened.

You should use `error()` whenever something incorrect or unhandleable happens.

If you're in a code path that is uncommon but still correct, consider using `warn()` instead.

#### `logger.fatal(message, information, callback?)`

`fatal()` will write your log message to the `["disk", "kafka", "console", "sentry"]` backends.

`fatal()` is meant to be used to print a fatal error. A fatal error should happen when something unrecoverable happens, i.e. it is fatal for the currently running node process.

You should use `fatal()` when something becomes corrupt and cannot be recovered without a restart, or when a key piece of infrastructure is fatally missing. You should also use `fatal()` when you encounter an unrecoverable error.

If your error is recoverable, or you are not going to shut down the process, you should use `error()` instead.

It's expected that you shut down the process once you have verified that the `fatal()` error message has been logged. You can do either a hard or soft shutdown.
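
A shutdown-after-flush sketch. The stub `fatal` calls its callback immediately; a real logtron logger calls back once the entry has been written to the backends:

```js
// Stub logger for illustration; in real code this is your logtron logger.
var logger = {
    fatal: function (message, information, callback) {
        callback(null);
    }
};

var shutDown = false;
logger.fatal('config corrupt', { file: '/etc/myapp.json' }, function () {
    // In a real process you would call process.exit(1) here
    // (hard shutdown), or stop accepting work and close servers
    // first (soft shutdown).
    shutDown = true;
});
```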

#### `logger.createChild(path, levels?, opts?)`

The `createChild` method returns a Logger that will create entries at a nested path.

Paths are lower-case and dot-delimited. Child loggers can be nested within other child loggers to construct deeper paths.

Child loggers implement log level methods for every key in the given levels, or the default levels. The levels must be given as an object; the values are not important for the use of `createChild`, so `true` will suffice if there isn't an object lying around with the keys you need.

`opts` specifies options for the child logger. The available options enable strict mode and add meta data to each entry:

 - To enable strict mode, pass the `strict` key with a true value. In strict mode the child logger will ensure that each log level has a corresponding backend in the parent logger. Otherwise the logger will replace any missing parent methods with a no-op function.
 - To add meta data to each log entry from the child, set the `extendMeta` key to `true` and the `meta` key to an object with your meta data.
 - The `metaFilter` key takes an array of objects which create filters that are run at log time. This allows you to automatically add the current value of an object property to the log meta without having to manually add the values at each log site. The format of a filter object is: `{'object': targetObj, 'mappings': {'src': 'dst', 'src2': 'dst2'}}`. Each filter has an `object` key, which is the target the data will be taken from. The `mappings` object contains keys which are the src of the data on the target object as a dot path, and values which are the destination it will be placed in on the meta object. A log site can still override this destination though.

```js
logger.createChild("requestHandler", {
    info: true,
    warn: true,
    log: true,
    trace: true
}, {
    extendMeta: true,
    // Each time we log this will include the session key
    meta: {
        sessionKey: 'abc123'
    },
    // Each time we log this will include whether the headers
    // have been written to the client yet, based on the
    // current value of res.headersSent
    metaFilter: [
        {object: res, mappings: {
            'headersSent': 'headersSent'
        }}
    ]
})
```

#### `logger.writeEntry(Entry, callback?)`

All of the log level methods internally create an `Entry` and use the `writeEntry` method to send it into routing. Child loggers use this method directly to forward arbitrary entries to the root level logger.

```ocaml
type Entry := {
    level: String,
    message: String,
    meta: Object,
    path: String
}
```

### `var backends = Logger.defaultBackends(options, clients)`

```ocaml
type Logger := { ... }

type KafkaClient := Object
type StatsdClient := {
    increment: (String) => void
}

logtron := Logger & {
    defaultBackends: (config: {
        logFolder?: String,
        kafka?: {
            proxyHost: String,
            proxyPort: Number
        },
        console?: Boolean,
        sentry?: {
            id: String
        }
    }, clients?: {
        statsd: StatsdClient,
        kafkaClient?: KafkaClient,
        isKafkaDisabled?: () => Boolean
    }) => {
        disk: Backend | null,
        kafka: Backend | null,
        console: Backend | null,
        sentry: Backend | null
    }
}
```

Rather than configuring the backends for `logtron` yourself, you can use the `defaultBackends` function.

`defaultBackends` takes a set of options and returns a hash of backends that you can pass to a logger like

```js
var logger = Logger({
    backends: Logger.defaultBackends(backendConfig)
})
```

You can also pass `defaultBackends` a `clients` argument to pass in a statsd client. The statsd client will then be passed to the backends so that they can be instrumented with statsd.

You can also configure a reusable `kafkaClient` on the `clients` object. This must be an instance of `uber-nodesol-write`.

#### `options.logFolder`

`options.logFolder` is an optional string. If you want the disk backend enabled, you should set this to a folder on disk where you want your disk logs written to.

#### `options.kafka`

`options.kafka` is an optional object. If you want the kafka backend enabled, you should set this to an object containing a `"proxyHost"` and `"proxyPort"` key.

`options.kafka.proxyHost` should be a string and is the hostname of the kafka REST proxy server to write to.

`options.kafka.proxyPort` should be a number and is the port of the kafka REST proxy server to write to.

#### `options.console`

`options.console` is an optional boolean. If you want the console backend enabled, you should set this to `true`.

#### `options.sentry`

`options.sentry` is an optional object. If you want the sentry backend enabled, you should set this to an object containing an `"id"` key.

`options.sentry.id` is the DSN URI used to talk to sentry.

#### `clients`

`clients` is an optional object. It contains all the concrete service clients that the backends will use to communicate with external services.

#### `clients.statsd`

If you want your backends instrumented with statsd, you should pass in a `statsd` client as `clients.statsd`. This ensures that we enable airlock monitoring on the kafka and sentry backends.

#### `clients.kafkaClient`

If you want to re-use a single `kafkaClient` in your application, you can pass in an instance of the `uber-nodesol-write` module and the logger will re-use this client instead of creating its own kafka client.

#### `clients.isKafkaDisabled`

If you want to be able to disable kafka at run time, you can pass an `isKafkaDisabled` predicate function.

If this function returns `true` then `logtron` will stop writing to kafka.
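
A sketch of wiring such a predicate through `clients`. The stub statsd client and the `kafkaEnabled` flag are illustrative; in practice the flag might come from a config service or feature toggle:

```js
// Stub statsd client for illustration; pass your real client in practice.
var statsd = { increment: function (name) {} };

// Hypothetical run-time flag you control.
var kafkaEnabled = true;

var clients = {
    statsd: statsd,
    // logtron consults this predicate; while it returns true,
    // writes to the kafka backend are skipped.
    isKafkaDisabled: function () {
        return !kafkaEnabled;
    }
};

// var backends = Logger.defaultBackends(backendConfig, clients);
```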

### Logging Errors

> I want to log errors when I get them in my callbacks

The `logger` supports passing in an `Error` instance as the metaObject field.

For example:

```js
fs.readFile(uri, function (err, content) {
    if (err) {
        logger.error('got file error', err);
    }
})
```

If you want to add extra information, you can also make the err one of the keys in the meta object.

For example:

```js
fs.readFile(uri, function (err, content) {
    if (err) {
        logger.error('got file error', {
            error: err,
            uri: uri
        });
    }
})
```

### Custom levels

> I want to add my own levels to the logger, how can I tweak
> the logger to use different levels

By default the logger has the levels specified above.

However, you can pass in your own level definition.

#### I want to remove a level

You can set a level to `null` to remove it. For example, this is how you would remove the `trace()` level:

```js
var logger = Logger({
    meta: { ... },
    backends: { ... },
    levels: {
        trace: null
    }
})
```

#### I want to add my own levels

You can add a level to a logger by adding a new `Level` record.

For example, this is how you would define an `access` level:

```js
var logger = Logger({
    meta: {},
    backends: {},
    levels: {
        access: {
            level: 25,
            backends: ['disk', 'console']
        }
    }
})

logger.access('got request', {
    uri: '/some-uri'
});
```

This adds an `access()` method to your logger that will write to the backend named `"disk"` and the backend named `"console"`.

#### I want to change an existing level

You can change an existing level by just redefining it.

For example, this is how you would mute the `trace` level:

```js
var logger = Logger({
    meta: {},
    backends: {},
    levels: {
        trace: {
            level: 10,
            backends: []
        }
    }
})
```

#### I want to add a level that writes to a custom backend

You can add a level that writes to a new backend name, and then add a backend with that name:

```js
var logger = Logger({
    meta: {},
    backends: {
        custom: CustomBackend()
    },
    levels: {
        custom: {
            level: 15,
            backends: ["custom"]
        }
    }
})

logger.custom('hello', { foo: "bar" });
```

As long as your `CustomBackend()` returns an object with a `createStream()` method that returns a `WritableStream`, this will work like you want it to.

### `var backend = Console()`

```ocaml
logtron/backends/console := () => {
    createStream: (meta: Object) => WritableStream
}
```

`Console()` can be used to create a backend that writes to the console.

The `Console` backend just writes to stdout.

### `var backend = Disk(options)`

```ocaml
logtron/backends/disk := (options: {
    folder: String
}) => {
    createStream: (meta: Object) => WritableStream
}
```

`Disk(options)` can be used to create a backend that writes to rotating files on disk.

The `Disk` backend depends on `meta.team` and `meta.project` being defined on the logger and uses those to create the filename it will write to.

#### `options.folder`

`options.folder` must be specified as a string, and it determines which folder the `Disk` backend will write to.

### `var backend = Kafka(options)`

```ocaml
logtron/backends/kafka := (options: {
    proxyHost: String,
    proxyPort: Number,
    statsd?: Object,
    kafkaClient?: KafkaClient,
    isDisabled?: () => Boolean
}) => {
    createStream: (meta: Object) => WritableStream
}
```

`Kafka(options)` can be used to create a backend that writes to a kafka topic.

The `Kafka` backend depends on `meta.team` and `meta.project` and uses those to define which topic it will write to.

#### `options.proxyHost`

Specify the `proxyHost` which we should use when connecting to the kafka REST proxy.

#### `options.proxyPort`

Specify the `proxyPort` which we should use when connecting to the kafka REST proxy.

#### `options.statsd`

If you pass a `statsd` client to the `Kafka` backend, it will use the `statsd` client to record information about the health of the `Kafka` backend.

#### `options.kafkaClient`

If you pass a `kafkaClient` to the `Kafka` backend, it will use this to write to kafka instead of creating its own client. You must ensure this is an instance of the `uber-nodesol-write` module.

#### `options.isDisabled`

If you want to be able to disable this backend at run time, you can pass in a predicate function.

When this predicate function returns `true`, the `Kafka` backend will stop writing to kafka.

### `var backend = Sentry(options)`

```ocaml
logtron/backends/sentry := (options: {
    dsn: String,
    statsd?: Object
}) => {
    createStream: (meta: Object) => WritableStream
}
```

`Sentry(options)` can be used to create a backend that will write to a sentry server.

#### `options.dsn`

Specify the `dsn` host to be used when connecting to sentry.

#### `options.statsd`

If you pass a `statsd` client to the `Sentry` backend, it will use the `statsd` client to record information about the health of the `Sentry` backend.

## Installation

`npm install logtron`

## Tests

`npm test`

There is a `kafka.js` test that will talk to kafka if it is running and just gets skipped if it's not running.

To run the kafka tests you have to run zookeeper & kafka with `npm run start-zk` and `npm run start-kafka`.

  [bunyan]: https://github.com/trentm/node-bunyan#levels