# browserify-handbook

how to build modular applications with browserify
# introduction
This document covers how to use [browserify](http://browserify.org) to build
modular applications.
[cc-by-3.0](http://creativecommons.org/licenses/by/3.0/)
browserify is a tool for compiling
[node-flavored](http://nodejs.org/docs/latest/api/modules.html) commonjs modules
for the browser.
You can use browserify to organize your code and use third-party libraries even
if you don't use [node](http://nodejs.org) itself in any other capacity except
for bundling and installing packages with npm.
The module system that browserify uses is the same as node, so
packages published to [npm](https://npmjs.org) that were originally intended for
use in node but not browsers will work just fine in the browser too.
Increasingly, people are publishing modules to npm which are intentionally
designed to work in both node and the browser using browserify, and many
packages on npm are intended for use in just the browser.
[npm is for all javascript](http://maxogden.com/node-packaged-modules.html),
front or backend alike.
# table of contents
- [introduction](#introduction)
- [table of contents](#table-of-contents)
- [node packaged manuscript](#node-packaged-manuscript)
- [node packaged modules](#node-packaged-modules)
- [require](#require)
- [exports](#exports)
- [bundling for the browser](#bundling-for-the-browser)
- [how browserify works](#how-browserify-works)
- [how node_modules works](#how-node_modules-works)
- [why concatenate](#why-concatenate)
- [development](#development)
- [source maps](#source-maps)
- [exorcist](#exorcist)
- [auto-recompile](#auto-recompile)
- [watchify](#watchify)
- [beefy](#beefy)
- [wzrd](#wzrd)
- [browserify-middleware, enchilada](#browserify-middleware-enchilada)
- [livereactload](#livereactload)
- [budo](#budo)
- [using the api directly](#using-the-api-directly)
- [grunt](#grunt)
- [gulp](#gulp)
- [builtins](#builtins)
- [Buffer](#Buffer)
- [process](#process)
- [global](#global)
- [__filename](#__filename)
- [__dirname](#__dirname)
- [transforms](#transforms)
- [writing your own](#writing-your-own)
- [package.json](#package.json)
- [browser field](#browser-field)
- [browserify.transform field](#browserifytransform-field)
- [finding good modules](#finding-good-modules)
- [module philosophy](#module-philosophy)
- [organizing modules](#organizing-modules)
- [avoiding ../../../../../../..](#avoiding-)
- [non-javascript assets](#non-javascript-assets)
- [reusable components](#reusable-components)
- [testing in node and the browser](#testing-in-node-and-the-browser)
- [testing libraries](#testing-libraries)
- [code coverage](#code-coverage)
- [testling-ci](#testling-ci)
- [bundling](#bundling)
- [saving bytes](#saving-bytes)
- [standalone](#standalone)
- [external bundles](#external-bundles)
- [ignoring and excluding](#ignoring-and-excluding)
- [browserify cdn](#browserify-cdn)
- [shimming](#shimming)
- [browserify-shim](#browserify-shim)
- [partitioning](#partitioning)
- [factor-bundle](#factor-bundle)
- [partition-bundle](#partition-bundle)
- [compiler pipeline](#compiler-pipeline)
- [build your own browserify](#build-your-own-browserify)
- [labeled phases](#labeled-phases)
- [deps](#deps)
- [insert-module-globals](#insert-module-globals)
- [json](#json)
- [unbom](#unbom)
- [syntax](#syntax)
- [sort](#sort)
- [dedupe](#dedupe)
- [label](#label)
- [emit-deps](#emit-deps)
- [debug](#debug)
- [pack](#pack)
- [wrap](#wrap)
- [browser-unpack](#browser-unpack)
- [plugins](#plugins)
- [using plugins](#using-plugins)
- [authoring plugins](#authoring-plugins)
# node packaged manuscript
You can install this handbook with npm, appropriately enough. Just do:
```
npm install -g browserify-handbook
```
Now you will have a `browserify-handbook` command that will open this readme
file in your `$PAGER`. Otherwise, you may continue reading this document as you
are presently doing.
# node packaged modules
Before we can dive too deeply into how to use browserify and how it works, it is
important to first understand how the
[node-flavored version](http://nodejs.org/docs/latest/api/modules.html)
of the commonjs module system works.
## require
In node, there is a `require()` function for loading code from other files.
If you install a module with [npm](https://npmjs.org):
```
npm install uniq
```
Then in a file `nums.js` we can `require('uniq')`:
```
var uniq = require('uniq');
var nums = [ 5, 2, 1, 3, 2, 5, 4, 2, 0, 1 ];
console.log(uniq(nums));
```
The output of this program when run with node is:
```
$ node nums.js
[ 0, 1, 2, 3, 4, 5 ]
```
You can require relative files by requiring a string that starts with a `.`. For
example, to load a file `foo.js` from `main.js`, in `main.js` you can do:
``` js
var foo = require('./foo.js');
console.log(foo(4));
```
If `foo.js` was in the parent directory, you could use `../foo.js` instead:
``` js
var foo = require('../foo.js');
console.log(foo(4));
```
or likewise for any other kind of relative path. Relative paths are always
resolved with respect to the invoking file's location.
Note that `require()` returned a function and we assigned that return value to a
variable called `uniq`. We could have picked any other name and it would have
worked the same. `require()` returns the exports of the module name that you
specify.
How `require()` works is unlike many other module systems where imports are akin
to statements that expose themselves as globals or file-local lexicals with
names declared in the module itself outside of your control. Under the node
style of code import with `require()`, someone reading your program can easily
tell where each piece of functionality came from. This approach scales much
better as the number of modules in an application grows.
## exports
To export a single thing from a file so that other files may import it, assign
over the value at `module.exports`:
``` js
module.exports = function (n) {
    return n * 111
};
```
Now when some module `main.js` loads your `foo.js`, the return value of
`require('./foo.js')` will be the exported function:
``` js
var foo = require('./foo.js');
console.log(foo(5));
```
This program will print:
```
555
```
You can export any kind of value with `module.exports`, not just functions.
For example, this is perfectly fine:
``` js
module.exports = 555
```
and so is this:
``` js
var numbers = [];
for (var i = 0; i < 100; i++) numbers.push(i);
module.exports = numbers;
```
There is another form of doing exports specifically for exporting items onto an
object. Here, `exports` is used instead of `module.exports`:
``` js
exports.beep = function (n) { return n * 1000 }
exports.boop = 555
```
This program is the same as:
``` js
module.exports.beep = function (n) { return n * 1000 }
module.exports.boop = 555
```
because `module.exports` is the same as `exports` and is initially set to an
empty object.
Note however that you can't do:
``` js
// this doesn't work
exports = function (n) { return n * 1000 }
```
because the export value lives on the `module` object, and so assigning a new
value for `exports` instead of `module.exports` masks the original reference.
Instead if you are going to export a single item, always do:
``` js
// instead
module.exports = function (n) { return n * 1000 }
```
If you're still confused, try to understand how modules work in
the background:
``` js
var module = {
    exports: {}
};

// If you require a module, it's basically wrapped in a function
(function(module, exports) {
    exports = function (n) { return n * 1000 };
}(module, module.exports))

console.log(module.exports); // it's still an empty object :(
```
Most of the time, you will want to export a single function or constructor with
`module.exports` because it's usually best for a module to do one thing.
The `exports` feature was originally the primary way of exporting functionality
and `module.exports` was an afterthought, but `module.exports` proved to be much
more useful in practice: it is more direct and clear, and it avoids duplication.
In the early days, this style used to be much more common:
foo.js:
``` js
exports.foo = function (n) { return n * 111 }
```
main.js:
``` js
var foo = require('./foo.js');
console.log(foo.foo(5));
```
but note that the `foo.foo` is a bit superfluous. Using `module.exports` it
becomes more clear:
foo.js:
``` js
module.exports = function (n) { return n * 111 }
```
main.js:
``` js
var foo = require('./foo.js');
console.log(foo(5));
```
## bundling for the browser
To run a module in node, you've got to start from somewhere.
In node you pass a file to the `node` command to run it:
```
$ node robot.js
beep boop
```
In browserify, you do this same thing, but instead of running the file, you
generate a stream of concatenated javascript files on stdout that you can write
to a file with the `>` operator:
```
$ browserify robot.js > bundle.js
```
Now `bundle.js` contains all the javascript that `robot.js` needs to work.
Just plop it into a single script tag in some html:
``` html
<html>
  <body>
    <script src="bundle.js"></script>
  </body>
</html>
```
Bonus: if you put your script tag right before the `</body>`, you can use all of
the dom elements on the page without waiting for a dom onready event.
There are many more things you can do with bundling. Check out the bundling
section elsewhere in this document.
## how browserify works
Browserify starts at the entry point files that you give it and searches for any
`require()` calls it finds using
[static analysis](http://npmjs.org/package/detective)
of the source code's
[abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree).
For every `require()` call with a string in it, browserify resolves those module
strings to file paths and then searches those file paths for `require()` calls
recursively until the entire dependency graph is visited.
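For example, in this hypothetical snippet the first two `require()` calls use
string literals that browserify can resolve, but the third argument is only
known at runtime, so browserify won't know what to include for it:

``` js
// statically resolvable: the arguments are plain strings
var uniq = require('uniq');
var local = require('./lib/local.js'); // hypothetical local file

// not statically resolvable: the argument is computed at runtime,
// so this require will not be picked up by browserify's analysis
var name = 'uni' + 'q';
var dynamic = require(name);
```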
Each file is concatenated into a single javascript file with a minimal
`require()` definition that maps the statically-resolved names to internal IDs.
This means that the bundle you generate is completely self-contained and has
everything your application needs to work with a pretty negligible overhead.
For more details about how browserify works, check out the compiler pipeline
section of this document.
## how node_modules works
node has a clever algorithm for resolving modules that is unique among rival
platforms.
Instead of resolving packages from an array of system search paths like how
`$PATH` works on the command line, node's mechanism is local by default.
If you `require('./foo.js')` from `/beep/boop/bar.js`, node will
look for `./foo.js` in `/beep/boop/foo.js`. Paths that start with a `./` or
`../` are always local to the file that calls `require()`.
If however you require a non-relative name such as `require('xyz')` from
`/beep/boop/foo.js`, node searches these paths in order, stopping at the first
match and raising an error if nothing is found:
```
/beep/boop/node_modules/xyz
/beep/node_modules/xyz
/node_modules/xyz
```
For each `xyz` directory that exists, node will first look for a
`xyz/package.json` to see if a `"main"` field exists. The `"main"` field defines
which file should take charge if you `require()` the directory path.
For example, if `/beep/node_modules/xyz` is the first match and
`/beep/node_modules/xyz/package.json` has:
```
{
  "name": "xyz",
  "version": "1.2.3",
  "main": "lib/abc.js"
}
```
then the exports from `/beep/node_modules/xyz/lib/abc.js` will be returned by
`require('xyz')`.
If there is no `package.json` or no `"main"` field, `index.js` is assumed:
```
/beep/node_modules/xyz/index.js
```
If you need to, you can reach into a package to pick out a particular file. For
example, to load the `lib/clone.js` file from the `dat` package, just do:
```
var clone = require('dat/lib/clone.js')
```
The recursive node_modules resolution will find the first `dat` package up the
directory hierarchy, then the `lib/clone.js` file will be resolved from there.
This `require('dat/lib/clone.js')` approach will work from any location where
you can `require('dat')`.
node also has a mechanism for searching an array of paths, but this mechanism is
deprecated and you should be using `node_modules/` unless you have a very good
reason not to.
The great thing about node's algorithm and how npm installs packages is that you
can never have a version conflict, unlike most every other platform. npm
installs the dependencies of each package into `node_modules`.
Each library gets its own local `node_modules/` directory where its dependencies
are stored, and each of those dependencies has its own `node_modules/`
directory, recursively all the way down.
This means that packages can successfully use different versions of libraries in
the same application, which greatly decreases the coordination overhead
necessary to iterate on APIs. This feature is very important for an ecosystem
like npm where there is no central authority to manage how packages are
published and organized. Everyone may simply publish as they see fit and not
worry about how their dependency version choices might impact other dependencies
included in the same application.
You can leverage how `node_modules/` works to organize your own local
application modules too. See the `avoiding ../../../../../../..` section for
more.
## why concatenate
Browserify is a build step that runs on the server. It generates a single bundle
file that has everything in it.
Here are some other ways of implementing module systems for the browser and what
their strengths and weaknesses are:
### window globals
Instead of a module system, each file defines properties on the window global
object or develops an internal namespacing scheme.
This approach does not scale well without extreme diligence since each new file
needs an additional `<script>` tag in all of the html pages where the
application will be rendered. Further, the files tend to be very order-sensitive
because some files need to be included before other files that expect globals to
already be present in the environment.
It can be difficult to refactor or maintain applications built this way.
On the plus side, all browsers natively support this approach and no server-side
tooling is required.
This approach tends to be very slow since each `<script>` tag initiates a
new round-trip http request.
### concatenate
Instead of window globals, all the scripts are concatenated beforehand on the
server. The code is still order-sensitive and difficult to maintain, but loads
much faster because only a single http request for a single `<script>` tag needs
to execute.
Without source maps, exceptions thrown will have offsets that can't be easily
mapped back to their original files.
### AMD
Instead of using `<script>` tags, every file is wrapped with a `define()`
function and callback. [This is AMD](http://requirejs.org/docs/whyamd.html).
The first argument is an array of modules to load that maps to each argument
supplied to the callback. Once all the modules are loaded, the callback fires.
``` js
define(['jquery'], function ($) {
    return function () {};
});
```
You can give your module a name in the first argument so that other modules can
include it.
There is a commonjs sugar syntax that stringifies each callback and scans it for
`require()` calls
[with a regexp](https://github.com/jrburke/requirejs/blob/master/require.js#L17).
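That sugared form looks roughly like this sketch:

``` js
// commonjs sugar: the loader stringifies this callback and scans it for require()
define(function (require) {
    var $ = require('jquery');
    return function () {};
});
```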
Code written this way is much less order-sensitive than concatenation or globals
since the order is resolved by explicit dependency information.
For performance reasons, most of the time AMD is bundled server-side into a
single file; the asynchronous loading feature of AMD is more commonly used
during development than in production.
### bundling commonjs server-side
If you're going to have a build step for performance and a sugar syntax for
convenience, why not scrap the whole AMD business altogether and bundle
commonjs? With tooling you can resolve modules to address order-sensitivity and
your development and production environments will be much more similar and less
fragile. The CJS syntax is nicer and the ecosystem is exploding because of node
and npm.
You can seamlessly share code between node and the browser. You just need a
build step and some tooling for source maps and auto-rebuilding.
Plus, we can use node's module lookup algorithms to save us from version
mismatch insanity so that we can have multiple conflicting versions of different
required packages in the same application and everything will still work. To
save bytes down the wire you can dedupe, which is covered elsewhere in this
document.
# development
Concatenation has some downsides, but these can be very adequately addressed
with development tooling.
## source maps
Browserify supports a `--debug`/`-d` flag and `opts.debug` parameter to enable
source maps. Source maps tell the browser to convert line and column offsets for
exceptions thrown in the bundle file back into the offsets and filenames of the
original sources.
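For example, here is a minimal sketch of turning on source maps through the
API; the file names are placeholders:

``` js
var browserify = require('browserify');
var fs = require('fs');

// opts.debug is the API equivalent of the --debug/-d flag
browserify('./main.js', { debug: true })
    .bundle()
    .pipe(fs.createWriteStream('bundle.js'));
```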
The source maps include all the original file contents inline so that you can
simply put the bundle file on a web server and not need to ensure that all the
original source contents are accessible from the web server with paths set up
correctly.
### exorcist
The downside of inlining all the source files into the inline source map is that
the bundle is twice as large. This is fine for debugging locally but not
practical for shipping source maps to production. However, you can use
[exorcist](https://npmjs.org/package/exorcist) to pull the inline source map out
into a separate `bundle.js.map` file:
``` sh
browserify main.js --debug | exorcist bundle.js.map > bundle.js
```
## auto-recompile
Running a command to recompile your bundle every time can be slow and tedious.
Luckily there are many tools to solve this problem. Some of these tools support
live-reloading to various degrees and others have a more traditional manual
refresh cycle.
These are just a few of the tools you can use, but there are many more on npm!
There are many different tools here that encompass many different tradeoffs and
development styles. It can be a little bit more work up-front to find the tools
that resonate most strongly with your own personal expectations and experience,
but I think this diversity helps programmers to be more effective and provides
more room for creativity and experimentation. I think diversity in tooling and a
smaller browserify core is healthier in the medium to long term than picking a
few "winners" by including them in browserify core (which creates all kinds of
havoc in meaningful versioning and bitrot in core).
That said, here are a few modules you might want to consider for setting up a
browserify development workflow. But keep an eye out for other tools not (yet)
on this list!
### [watchify](https://npmjs.org/package/watchify)
You can use `watchify` interchangeably with `browserify` but instead of writing
to an output file once, watchify will write the bundle file and then watch all
of the files in your dependency graph for changes. When you modify a file, the
new bundle file will be written much more quickly than the first time because of
aggressive caching.
You can use `-v` to print a message every time a new bundle is written:
```
$ watchify browser.js -d -o static/bundle.js -v
610598 bytes written to static/bundle.js 0.23s
610606 bytes written to static/bundle.js 0.10s
610597 bytes written to static/bundle.js 0.14s
610606 bytes written to static/bundle.js 0.08s
610597 bytes written to static/bundle.js 0.08s
610597 bytes written to static/bundle.js 0.19s
```
Here is a handy configuration for using watchify and browserify with the
package.json "scripts" field:
``` json
{
  "scripts": {
    "build": "browserify browser.js -o static/bundle.js",
    "watch": "watchify browser.js -o static/bundle.js --debug --verbose"
  }
}
```
To build the bundle for production do `npm run build` and to watch files for
changes during development do `npm run watch`.
[Learn more about `npm run`](http://substack.net/task_automation_with_npm_run).
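If you prefer to drive watchify from the API instead of the command line, a
rough sketch based on the documented watchify setup looks like this; the file
paths are placeholders:

``` js
var browserify = require('browserify');
var watchify = require('watchify');
var fs = require('fs');

var b = browserify({
    entries: ['./browser.js'],
    cache: {},        // required by watchify for incremental rebuilds
    packageCache: {},
    plugin: [watchify]
});

b.on('update', bundle); // re-bundle whenever a file in the graph changes
bundle();

function bundle() {
    b.bundle()
        .on('error', console.error)
        .pipe(fs.createWriteStream('static/bundle.js'));
}
```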
### [beefy](https://www.npmjs.org/package/beefy)
If you would rather spin up a web server that automatically recompiles your code
when you modify it, check out [beefy](http://didact.us/beefy/).
Just give beefy an entry file:
```
beefy main.js
```
and it will set up shop on an http port.
### [wzrd](https://github.com/maxogden/wzrd)
In a similar spirit to beefy but in a more minimal form is
[wzrd](https://github.com/maxogden/wzrd).
Just `npm install -g wzrd` then you can do:
```
wzrd app.js
```
and open up http://localhost:9966 in your browser.
### browserify-middleware, enchilada
If you are using express, check out
[browserify-middleware](https://www.npmjs.org/package/browserify-middleware)
or [enchilada](https://www.npmjs.org/package/enchilada).
They both provide middleware you can drop into an express application for
serving browserify bundles.
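For example, a minimal browserify-middleware setup might look something like
this sketch; the paths and port are placeholders:

``` js
var express = require('express');
var browserify = require('browserify-middleware');

var app = express();

// serve a bundle built from ./client/main.js at /bundle.js
app.get('/bundle.js', browserify('./client/main.js'));

app.listen(8000);
```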
### [livereactload](https://github.com/milankinen/livereactload)
livereactload is a tool for [react](https://github.com/facebook/react)
that automatically updates your web page state when you modify your code.
livereactload is just an ordinary browserify transform that you can load with
`-t livereactload`, but you should consult the
[project readme](https://github.com/milankinen/livereactload#livereactload)
for more information.
### [budo](https://github.com/mattdesl/budo)
budo is a browserify development server with a focus on incremental bundling and
live reloading, including for css.
First make sure the `watchify` command is installed along with budo:
```
npm install -g watchify budo
```
then tell budo to watch a file and listen on http://localhost:9966
```
budo app.js
```
Now every time you update `app.js` or any other file in your dependency graph,
the code will update after a refresh.
Or, to automatically reload the page live when a file changes, you can do:
```
budo app.js --live
```
Check out [budo-chrome](https://github.com/mattdesl/budo-chrome) for a way to
configure budo to update the code live without even reloading the page
(sometimes called hot reloading).
## using the api directly
You can just use the API directly from an ordinary `http.createServer()` for
development too:
``` js
var browserify = require('browserify');
var http = require('http');

http.createServer(function (req, res) {
    if (req.url === '/bundle.js') {
        res.setHeader('content-type', 'application/javascript');
        var b = browserify(__dirname + '/main.js').bundle();
        b.on('error', console.error);
        b.pipe(res);
    }
    else {
        res.writeHead(404);
        res.end('not found');
    }
}).listen(8000); // pick whatever port suits your development setup
```
## grunt
If you use grunt, you'll probably want to use the
[grunt-browserify](https://www.npmjs.org/package/grunt-browserify) plugin.
## gulp
If you use gulp, you should use the browserify API directly.
Here is
[a guide for getting started](http://viget.com/extend/gulp-browserify-starter-faq)
with gulp and browserify.
Here is a guide on how to [make browserify builds fast with watchify using
gulp](https://github.com/gulpjs/gulp/blob/master/docs/recipes/fast-browserify-builds-with-watchify.md)
from the official gulp recipes.
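A bare-bones gulp task using the browserify API directly might look like this
sketch, which assumes
[vinyl-source-stream](https://www.npmjs.org/package/vinyl-source-stream) to
adapt browserify's output stream for gulp; the paths are placeholders:

``` js
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

gulp.task('bundle', function () {
    return browserify('./src/main.js')
        .bundle()
        .on('error', console.error)
        .pipe(source('bundle.js')) // give the stream a filename for gulp
        .pipe(gulp.dest('./dist'));
});
```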
# builtins
In order to make more npm modules originally written for node work in the
browser, browserify provides many browser-specific implementations of node core
libraries:
* [assert](https://npmjs.org/package/assert)
* [buffer](https://npmjs.org/package/buffer)
* [console](https://npmjs.org/package/console-browserify)
* [constants](https://npmjs.org/package/constants-browserify)
* [crypto](https://npmjs.org/package/crypto-browserify)
* [domain](https://npmjs.org/package/domain-browser)
* [events](https://npmjs.org/package/events)
* [http](https://npmjs.org/package/http-browserify)
* [https](https://npmjs.org/package/https-browserify)
* [os](https://npmjs.org/package/os-browserify)
* [path](https://npmjs.org/package/path-browserify)
* [punycode](https://npmjs.org/package/punycode)
* [querystring](https://npmjs.org/package/querystring)
* [stream](https://npmjs.org/package/stream-browserify)
* [string_decoder](https://npmjs.org/package/string_decoder)
* [timers](https://npmjs.org/package/timers-browserify)
* [tty](https://npmjs.org/package/tty-browserify)
* [url](https://npmjs.org/package/url)
* [util](https://npmjs.org/package/util)
* [vm](https://npmjs.org/package/vm-browserify)
* [zlib](https://npmjs.org/package/browserify-zlib)
events, stream, url, path, and querystring are particularly useful in a browser
environment.
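For example, requiring node's `path` module from browser code just works
because browserify swaps in
[path-browserify](https://npmjs.org/package/path-browserify) behind the scenes:

``` js
var path = require('path');
console.log(path.join('/beep', 'boop', 'robot.js')); // "/beep/boop/robot.js"
```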
Additionally, if browserify detects the use of `Buffer`, `process`, `global`,
`__filename`, or `__dirname`, it will include a browser-appropriate definition.
So even if a module does a lot of buffer and stream operations, it will probably
just work in the browser, so long as it doesn't do any server IO.
If you haven't done any node before, here are some examples of what each of
those globals can do. Note too that these globals are only actually defined when
you or some module you depend on uses them.
## [Buffer](http://nodejs.org/docs/latest/api/buffer.html)
In node all the file and network APIs deal with Buffer chunks. In browserify the
Buffer API is provided by [buffer](https://www.npmjs.org/package/buffer), which
uses augmented typed arrays in a very performant way with fallbacks for old
browsers.
Here's an example of using `Buffer` to convert a base64 string to hex:
```
var buf = Buffer('YmVlcCBib29w', 'base64');
var hex = buf.toString('hex');
console.log(hex);
```
This example will print:
```
6265657020626f6f70
```
## [process](http://nodejs.org/docs/latest/api/process.html#process_process)
In node, `process` is a special object that handles information and control for
the running process such as environment, signals, and standard IO streams.
Of particular consequence is the `process.nextTick()` implementation that
interfaces with the event loop.
In browserify the process implementation is handled by the
[process module](https://www.npmjs.org/package/process) which just provides
`process.nextTick()` and little else.
Here's what `process.nextTick()` does:
```
setTimeout(function () {
    console.log('third');
}, 0);

process.nextTick(function () {
    console.log('second');
});

console.log('first');
```
This script will output:
```
first
second
third
```
`process.nextTick(fn)` is like `setTimeout(fn, 0)`, but faster because
`setTimeout` is artificially slower in javascript engines for compatibility reasons.
## [global](http://nodejs.org/docs/latest/api/all.html#all_global)
In node, `global` is the top-level scope where global variables are attached
similar to how `window` works in the browser. In browserify, `global` is just an
alias for the `window` object.
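A quick way to see this in a bundle (just an illustration, not a recommended
pattern):

``` js
// in a browserify bundle, `global` and `window` refer to the same object
global.answer = 42;
console.log(window.answer === 42); // true
```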
## [__filename](http://nodejs.org/docs/latest/api/all.html#all_filename)
`__filename` is the path to the current file, which is different for each file.
To prevent disclosing system path information, this path is rooted at the
`opts.basedir` that you pass to `browserify()`, which defaults to the
[current working directory](https://en.wikipedia.org/wiki/Current_working_directory).
If we have a `main.js`:
``` js
var bar = require('./foo/bar.js');
console.log('here in main.js, __filename is:', __filename);
bar();
```
and a `foo/bar.js`:
``` js
module.exports = function () {
    console.log('here in foo/bar.js, __filename is:', __filename);
};
```
then running browserify starting at `main.js` gives this output:
```
$ browserify main.js | node
here in main.js, __filename is: /main.js
here in foo/bar.js, __filename is: /foo/bar.js
```
## [__dirname](http://nodejs.org/docs/latest/api/all.html#all_dirname)
`__dirname` is the directory of the current file. Like `__filename`, `__dirname`
is rooted at the `opts.basedir`.
Here's an example of how `__dirname` works:
main.js:
``` js
require('./x/y/z/abc.js');
console.log('in main.js __dirname=' + __dirname);
```
x/y/z/abc.js:
``` js
console.log('in abc.js, __dirname=' + __dirname);
```
output:
```
$ browserify main.js | node
in abc.js, __dirname=/x/y/z
in main.js __dirname=/
```
# transforms
Instead of baking in support for everything, browserify supports a flexible
transform system that is used to convert source files in-place.
This way you can `require()` files written in coffee script or templates and
everything will be compiled down to javascript.
To use [coffeescript](http://coffeescript.org/) for example, you can use the
[coffeeify](https://www.npmjs.org/package/coffeeify) transform.
Make sure you've installed coffeeify first with `npm install coffeeify` then do:
```
$ browserify -t coffeeify main.coffee > bundle.js
```
or with the API you can do:
```
var b = browserify('main.coffee');
b.transform('coffeeify');
```
The best part is, if you have source maps enabled with `--debug` or
`opts.debug`, the bundle.js will map exceptions back into the original coffee
script source files. This is very handy for debugging with firebug or chrome
inspector.
## writing your own
Transforms implement a simple streaming interface. Here is a transform that
replaces `$CWD` with `process.cwd()`:
``` js
var through = require('through2');

module.exports = function (file) {
    return through(function (buf, enc, next) {
        this.push(buf.toString('utf8').replace(/\$CWD/g, process.cwd()));
        next();
    });
};
```
The transform function fires for every `file` in the current package and returns
a transform stream that performs the conversion. Browserify writes the original
file contents into the stream and reads the transformed contents back out of it.
Simply save your transform to a file or make a package and then add it with
`-t ./your_transform.js`.
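You can also apply your transform through the API; the file names here are
placeholders:

``` js
var browserify = require('browserify');

var b = browserify('./main.js');
b.transform(require('./your_transform.js'));
b.bundle().pipe(process.stdout);
```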
For more information about how streams work, check out the
[stream handbook](https://github.com/substack/stream-handbook).
# package.json
## browser field
You can define a `"browser"` field in the package.json of any package that will
tell browserify to override lookups for the main field and for individual
modules.
If you have a module with a main entry point of `main.js` for node but have a
browser-specific entry point at `browser.js`, you can do:
``` json
{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": "browser.js"
}
```
Now when somebody does `require('mypkg')` in node, they will get the exports
from `main.js`, but when they do `require('mypkg')` in a browser, they will get
the exports from `browser.js`.
Splitting up whether you are in the browser or not with a `"browser"` field in
this way is greatly preferable to checking whether you are in a browser at
runtime because you may want to load different modules based on whether you are
in node or the browser. If the `require()` calls for both node and the browser
are in the same file, browserify's static analysis will include everything
whether you use those files or not.
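For example, if a module does the environment check at runtime like the sketch
below, browserify's static analysis sees both `require()` calls and bundles
both implementations even though only one of them will ever run in the browser;
the file names are hypothetical:

``` js
// both files end up in the bundle because both requires are statically visible
module.exports = (typeof window !== 'undefined')
    ? require('./browser-impl.js')
    : require('./node-impl.js');
```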
You can do more with the "browser" field as an object instead of a string.
For example, if you only want to swap out a single file in `lib/` with a
browser-specific version, you could do:
``` json
{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "lib/foo.js": "lib/browser-foo.js"
  }
}
```
or if you want to swap out a module used locally in the package, you can do:
``` json
{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "fs": "level-fs-browser"
  }
}
```
You can ignore files (setting their contents to the empty object) by setting
their values in the browser field to `false`:
``` json
{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "winston": false
  }
}
```
The browser field *only* applies to the current package. Any mappings you put
will not propagate down to its dependencies or up to its dependents. This
isolation is designed to protect modules from each other so that when you
require a module you won't need to worry about any system-wide effects it might
have. Likewise, you shouldn't need to worry about how your local configuration
might adversely affect modules far away deep into your dependency graph.
## browserify.transform field
You can configure transforms to be automatically applied when a module is loaded
in a package's `browserify.transform` field. For example, we can automatically
apply the [brfs](https://npmjs.org/package/brfs) transform with this
package.json:
``` json
{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browserify": {
    "transform": [ "brfs" ]
  }
}
```
Now in our `main.js` we can do:
``` js
var fs = require('fs');
var src = fs.readFileSync(__dirname + '/foo.txt', 'utf8');
module.exports = function (x) { return src.replace(x, 'zzz') };
```
and the `fs.readFileSync()` call will be inlined by brfs without consumers of
the module having to know. You can apply as many transforms as you like in the
transform array and they will be applied in order.
Like the `"browser"` field, transforms configured in package.json will only
apply to the local package for the same reasons.
### configuring transforms
Sometimes a transform takes configuration options on the command line. To apply these
from package.json you can do the following.
**on the command line**
```
browserify -t coffeeify \
    -t [ browserify-ngannotate --ext .coffee ] \
    index.coffee > index.js
```
**in package.json**
``` json
"browserify": {
"transform": [
"coffeeify",
["browserify-ngannotate", {"ext": ".coffee"}]
]
}
```
# finding good modules
Here are [some useful heuristics](http://substack.net/finding_modules)
for finding good modules on npm that work in the browser:
* I can install it with npm
* code snippet on the readme using require() - from a quick glance I should see
how to integrate the library into what I'm presently working on
* has a very clear, narrow idea about scope and purpose
* knows when to delegate to other libraries - doesn't try to do too many things itself
* written or maintained by authors whose opinions about software scope,
modularity, and interfaces I generally agree with (often a faster shortcut
than reading the code/docs very closely)
* inspecting which modules depend on the library I'm evaluating - this is baked
into the package page for modules published to npm
Other metrics like number of stars on github, project activity, or a slick
landing page, are not as reliable.
## module philosophy
People used to think that exporting a bunch of handy utility-style things would
be the main way that programmers would consume code because that is the primary
way of exporting and importing code on most other platforms and indeed still
persists even on npm.
However, this
[kitchen-sink mentality](https://github.com/substack/node-mkdirp/issues/17)
toward including a bunch of thematically-related but separable functionality
into a single package appears to be an artifact of the difficulty of
publishing and discovery in a pre-github, pre-npm era.
There are two other big problems with modules that try to export a bunch of
functionality all in one place under the auspices of convenience: demarcation
turf wars and finding which modules do what.
Packages that are grab-bags of features
[waste a ton of time policing boundaries](https://github.com/jashkenas/underscore/search?q=%22special-case%22&ref=cmdform&type=Issues)
about which new features belong and don't belong.
There is no clear natural boundary of the problem domain in this kind of package
about what the scope is, it's all
[somebody's smug opinion](http://david.heinemeierhansson.com/2012/rails-is-omakase.html).
Node, npm, and browserify are not that. They are avowedly à la carte,
participatory, and would rather celebrate disagreement and the dizzying
proliferation of new ideas and approaches than try to clamp down in the name of
conformity, standards, or "best practices".
Nobody who needs to do gaussian blur ever thinks "hmm I guess I'll start checking
generic mathematics, statistics, image processing, and utility libraries to see
which one has gaussian blur in it. Was it stats2 or image-pack-utils or
maths-extra or maybe underscore has that one?"
No. None of this. Stop it. They `npm search gaussian` and they immediately see
[ndarray-gaussian-filter](https://npmjs.org/package/ndarray-gaussian-filter) and
it does exactly what they want and then they continue on with their actual
problem instead of getting lost in the weeds of somebody's neglected grand
utility fiefdom.
# organizing modules
## avoiding ../../../../../../..
Not everything in an application properly belongs on the public npm and the
overhead of setting up a private npm or git repo is still rather large in many
cases. Here are some approaches for avoiding the `../../../../../../../`
relative paths problem.
### symlink
The simplest thing you can do is to symlink your app root directory into your
node_modules/ directory.
Did you know that [symlinks work on windows
too](http://www.howtogeek.com/howto/windows-vista/using-symlinks-in-windows-vista/)?
To link a `lib/` directory in your project root into `node_modules`, do:
```
ln -s ../lib node_modules/app
```
and now from anywhere in your project you'll be able to require files in `lib/`
by doing `require('app/foo.js')` to get `lib/foo.js`.
### node_modules
People sometimes object to putting application-specific modules into
node_modules because it is not obvious how to check in your internal modules
without also checking in third-party modules from npm.
The answer is quite simple! If you have a `.gitignore` file that ignores
`node_modules`:
```
node_modules
```
You can just add an exception with `!` for each of your internal application
modules:
```
node_modules/*
!node_modules/foo
!node_modules/bar
```
Please note that you can't *unignore* a subdirectory if the parent is already
ignored. So instead of ignoring `node_modules`, you have to ignore every
directory *inside* `node_modules` with the `node_modules/*` trick, and then you
can add your exceptions.
Now anywhere in your application you will be able to `require('foo')` or
`require('bar')` without having a very large and fragile relative path.
If you have a lot of modules and want to keep them more separate from the
third-party modules installed by npm, you can just put them all under a
directory in `node_modules` such as `node_modules/app`:
```
node_modules/app/foo
node_modules/app/bar
```
Now you will be able to `require('app/foo')` or `require('app/bar')` from
anywhere in your application.
In your `.gitignore`, just add an exception for `node_modules/app`:
```
node_modules/*
!node_modules/app
```
If your application has transforms configured in package.json, you'll need to
create a separate package.json with its own transform field in your
`node_modules/foo` or `node_modules/app/foo` component directory because
transforms don't apply across module boundaries. This will make your modules
more robust against configuration changes in your application and it will be
easier to independently reuse the packages outside of your application.
### custom paths
You might see some places talk about using the `$NODE_PATH` environment variable
or `opts.paths` to add directories for node and browserify to look in to find
modules.
Unlike most other platforms, using a shell-style array of path directories with
`$NODE_PATH` is not as favorable in node compared to making effective use of the
`node_modules` directory.
This is because your application becomes more tightly coupled to a runtime
environment configuration, so there are more moving parts and your application
will only work when your environment is set up correctly.
node and browserify both support but discourage the use of `$NODE_PATH`.
## non-javascript assets
There are many
[browserify transforms](https://github.com/substack/node-browserify/wiki/list-of-transforms)
you can use to do many things. Commonly, transforms are used to include
non-javascript assets into bundle files.
### brfs
One way of including any kind of asset that works in both node and the browser
is brfs.
brfs uses static analysis to compile the results of `fs.readFile()` and
`fs.readFileSync()` calls down to source contents at compile time.
For example, this `main.js`:
``` js
var fs = require('fs');
var html = fs.readFileSync(__dirname + '/robot.html', 'utf8');
console.log(html);
```
applied through brfs would become something like:
``` js
var fs = require('fs');
var html = "<b>beep boop</b>";
console.log(html);
```
This is handy because you can reuse the exact same code in node and the browser,
which makes sharing modules and testing much simpler.
`fs.readFile()` and `fs.readFileSync()` accept the same arguments as in node,
which makes including inline image assets as base64-encoded strings very easy:
``` js
var fs = require('fs');
var imdata = fs.readFileSync(__dirname + '/image.png', 'base64');
var img = document.createElement('img');
img.setAttribute('src', 'data:image/png;base64,' + imdata);
document.body.appendChild(img);
```
If you have some css you want to inline into your bundle, you can do that too
with the assistance of a module such as
[insert-css](https://npmjs.org/package/insert-css):
``` js
var fs = require('fs');
var insertStyle = require('insert-css');
var css = fs.readFileSync(__dirname + '/style.css', 'utf8');
insertStyle(css);
```
Inserting css this way works fine for small reusable modules that you distribute
with npm because they are fully-contained, but if you want a more holistic
approach to asset management using browserify, check out
[atomify](https://www.npmjs.org/package/atomify) and
[parcelify](https://www.npmjs.org/package/parcelify).
### hbsify
### jadeify
### reactify
## reusable components
Putting these ideas about code organization together, we can build a reusable UI
component that we can reuse across our application or in other applications.
Here is a bare-bones example of an empty widget module:
``` js
module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = document.createElement('div');
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};
```
Handy javascript constructor tip: you can include a `this instanceof Widget`
check like above to let people consume your module with `new Widget` or
`Widget()`. It's nice because it hides an implementation detail from your API
and you still get the performance benefits and indentation wins of using
prototypes.
To use this widget, just use `require()` to load the widget file, instantiate
it, and then call `.appendTo()` with a css selector string or a dom element.
Like this:
``` js
var Widget = require('./widget.js');
var w = Widget();
w.appendTo('#container');
```
and now your widget will be appended to the DOM.
Creating HTML elements procedurally is fine for very simple content but gets
very verbose and unclear for anything bigger. Luckily there are many transforms
available to ease importing HTML into your javascript modules.
Let's extend our widget example using [brfs](https://npmjs.org/package/brfs). We
can also use [domify](https://npmjs.org/package/domify) to turn the string that
`fs.readFileSync()` returns into an html dom element:
``` js
var fs = require('fs');
var domify = require('domify');

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};
```
and now our widget will load a `widget.html`, so let's make one:
``` html
<div class="widget">
<h1 class="name"></h1>
<div class="msg"></div>
</div>
```
It's often useful to emit events. Here's how we can emit events using the
built-in `events` module and the [inherits](https://npmjs.org/package/inherits)
module:
``` js
var fs = require('fs');
var domify = require('domify');
var inherits = require('inherits');
var EventEmitter = require('events').EventEmitter;

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

inherits(Widget, EventEmitter);
module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
    this.emit('append', target);
};
```
Now we can listen for `'append'` events on our widget instance:
``` js
var Widget = require('./widget.js');
var w = Widget();
w.on('append', function (target) {
    console.log('appended to: ' + target.outerHTML);
});
w.appendTo('#container');
```
We can add more methods to our widget to set elements on the html:
``` js
var fs = require('fs');
var domify = require('domify');
var inherits = require('inherits');
var EventEmitter = require('events').EventEmitter;

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

inherits(Widget, EventEmitter);
module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};

Widget.prototype.setName = function (name) {
    this.element.querySelector('.name').textContent = name;
}

Widget.prototype.setMessage = function (msg) {
    this.element.querySelector('.msg').textContent = msg;
}
```
If setting element attributes and content gets too verbose, check out
[hyperglue](https://npmjs.org/package/hyperglue).
Now finally, we can toss our `widget.js` and `widget.html` into
`node_modules/app-widget`. Since our widget uses the
[brfs](https://npmjs.org/package/brfs) transform, we can create a `package.json`
with:
``` json
{
  "name": "app-widget",
  "version": "1.0.0",
  "private": true,
  "main": "widget.js",
  "browserify": {
    "transform": [ "brfs" ]
  },
  "dependencies": {
    "brfs": "^1.1.1",
    "inherits": "^2.0.1"
  }
}
```
And now whenever we `require('app-widget')` from anywhere in our application,
brfs will be applied to our `widget.js` automatically!
Our widget can even maintain its own dependencies. This way we can update
dependencies in one widget without worrying about breaking changes cascading
over into other widgets.
Make sure to add an exclusion in your `.gitignore` for
`node_modules/app-widget`:
```
node_modules/*
!node_modules/app-widget
```
You can read more about [shared rendering in node and the
browser](http://substack.net/shared_rendering_in_node_and_the_browser) if you
want to learn about sharing rendering logic between node and the browser using
browserify and some streaming html libraries.
# testing in node and the browser
Testing modular code is very easy! One of the biggest benefits of modularity is
that your interfaces become much easier to instantiate in isolation and so it's
easy to make automated tests.
Unfortunately, few testing libraries play nicely out of the box with modules;
most tend to roll their own idiosyncratic interfaces with implicit globals and
obtuse flow control that get in the way of a clean design with good separation.
People also make a huge fuss about "mocking" but it's usually not necessary if
you design your modules with testing in mind. Keeping IO separate from your
algorithms, carefully restricting the scope of your module, and accepting
callback parameters for different interfaces can all make your code much easier
to test.
For example, if you have a library that does both IO and speaks a protocol,
[consider separating the IO layer from the
protocol](https://www.youtube.com/watch?v=g5ewQEuXjsQ#t=12m30)
using an interface like [streams](https://github.com/substack/stream-handbook).
Your code will be easier to test and reusable in different contexts that you
didn't initially envision. This is a recurring theme of testing: if your code is
hard to test, it is probably not modular enough or contains the wrong balance of
abstractions. Testing should not be an afterthought, it should inform your
whole design and it will help you to write better interfaces.
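As a sketch of that idea, a protocol implemented as a plain transform stream
can be exercised in tests by writing strings in and asserting on what comes
out, with no sockets involved; the "protocol" here is made up for illustration:

``` js
var through = require('through2');

// a toy protocol: upper-case each chunk and terminate it with a newline
module.exports = function () {
    return through(function (buf, enc, next) {
        next(null, buf.toString('utf8').toUpperCase() + '\n');
    });
};
```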
## testing libraries
### [tape](https://npmjs.org/package/tape)
Tape was specifically designed from the start to work well in both node and
browserify. Suppose we have an `index.js` with an async interface:
``` js
module.exports = function (x, cb) {
    setTimeout(function () {
        cb(x * 100);
    }, 1000);
};
```
Here's how we can test this module using [tape](https://npmjs.org/package/tape).
Let's put this file in `test/beep.js`:
``` js
var test = require('tape');
var hundreder = require('../');

test('beep', function (t) {
    t.plan(1);

    hundreder(5, function (n) {
        t.equal(n, 500, '5*100 === 500');
    });
});
```
Because the test file lives in `test/`, we can require the `index.js` in the
parent directory by doing `require('../')`. `index.js` is the default place that
node and browserify look for a module if there is no package.json in that
directory with a `main` field.
We can `require()` tape like any other library after it has been installed with
`npm install tape`.
The string `'beep'` is an optional name for the test.
The 3rd argument to `t.equal()` is a completely optional description.