For more granular control over what external modules are included in your build, you can use the following features:
--include <name> and --exclude <name> allow you to include or exclude a specific module in your build. Excluded modules are looked up from the global context (e.g. if there is a global require() function, it will be called for those modules). Module names are resolved relative to the base path of the build.
--remap <name>=<expression> allows you to bind external modules to any expression which will be evaluated client-side. For example you might want to load jQuery externally from window.$ using --remap jquery=window.$.
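A minimal sketch of what a --remap binding amounts to on the client side: require() for a remapped name evaluates the bound expression instead of looking up a bundled module. All names here are illustrative; the actual gluejs runtime is more involved.

```javascript
// Sketch: remapped names resolve to an evaluated expression (e.g. window.$),
// everything else goes through the normal module table.
function makeRequire(moduleTable, remaps) {
  return function require(name) {
    if (name in remaps) return remaps[name]();      // expression evaluated client-side
    if (name in moduleTable) return moduleTable[name]();
    throw new Error('Cannot find module: ' + name);
  };
}

// Usage: pretend window.$ exists in the page.
const fakeWindow = { $: { fn: 'jquery-like' } };
const req = makeRequire(
  { './app': () => ({ ok: true }) },
  { jquery: () => fakeWindow.$ }                    // --remap jquery=window.$
);
```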
Switching between development and production modes:
Example:
if (development) {
  // serve a fresh build on each request via the Express middleware
  // (middleware options elided)
  app.use(glue.middleware(/* build options */));
} else {
  // write a static build once, then serve the result as a plain file
  glue.package();
  app.use(express.static(outDir));
}
Multiple entry point bundles, which make use of a set of shared modules. For example:
You can run a build which produces a file called shared.js which contains the modules which are used by both index.js and admin.js:
glue({
  include: [ './index.js', './admin.js' ],
  ...
  out: fs.createWriteStream('./shared.js')
});
Implement factor-bundle or equivalent.
Easiest path here probably is to move to a module-deps compatible output format for the transform runner.
https://github.com/substack/module-deps
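The factoring step itself could be sketched as computing which modules appear in more than one entry point's dependency set; those go into shared.js and the rest stay in their own bundle. This is an illustration of the idea, not the factor-bundle implementation.

```javascript
// Sketch: given a dependency list per entry point, modules used by more than
// one entry point are factored into the shared bundle.
function factorShared(depsByEntry) {
  const counts = {};
  Object.values(depsByEntry).forEach(deps =>
    deps.forEach(d => { counts[d] = (counts[d] || 0) + 1; }));
  const shared = Object.keys(counts).filter(d => counts[d] > 1);
  const perEntry = {};
  for (const [entry, deps] of Object.entries(depsByEntry)) {
    perEntry[entry] = deps.filter(d => !shared.includes(d));
  }
  return { shared, perEntry };
}
```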
--no-externals: prevents any modules under node_modules from being included; the build only contains files that are not under the node_modules folder.
--only-externals: only bundles modules under node_modules.
--extensions: by default, only .js and .json files are included in builds. To add further extensions, such as .coffee or .jade, use the --extensions option. For example, --extensions .jade allows require('./foo') to match ./foo.jade in the same folder.
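Extension resolution can be sketched as trying the requested path as-is, then each configured extension in order. This takes the set of existing files as input to keep the sketch deterministic; the real resolver also handles directories, package.json main fields, etc.

```javascript
// Sketch: resolve a require() request against configured extensions, in order.
function resolveFile(request, existingFiles, extensions = ['.js', '.json']) {
  if (existingFiles.includes(request)) return request;
  for (const ext of extensions) {
    if (existingFiles.includes(request + ext)) return request + ext;
  }
  return null; // missing module; see --ignore-missing / --exclude-missing
}
```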
--command extension=str and --transform extension=str: these extended versions of command and transform specification allow you to load and apply transforms on specific file extensions. For example: --transform .jade=jadeify will apply the jadeify transform on .jade files.
Also full build result transform?
For external source maps, one can extract from the output.
Related:
--ignore-missing: causes missing modules to be ignored; require() on them returns {}.
--exclude-missing: causes missing modules to be excluded; require() on them is looked up from the higher-level scope and, if not found, an error is thrown.
Default: warn about missing modules.
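The three behaviors for a missing module at require() time could be sketched as follows (illustrative shape, not the actual gluejs runtime):

```javascript
// Sketch: what happens when a required name is not in the bundle.
function makeRequire(moduleTable, opts = {}, globalRequire) {
  return function require(name) {
    if (name in moduleTable) return moduleTable[name]();
    if (opts.ignoreMissing) return {};               // --ignore-missing
    if (opts.excludeMissing && globalRequire) {
      return globalRequire(name);                    // --exclude-missing: outer scope
    }
    throw new Error('Cannot find module: ' + name);  // default: hard failure
  };
}
```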
cache clean
Look into how deamdify, deglobalify and es6ify might be made to work with gluejs.
https://github.com/substack/node-browserify/pull/336
browserify.transform field in package.json
By default, browserify considers only *.js files in such cases.
Note that if files do not contain JavaScript source code, you also need to specify a corresponding transform for them.
Strip BOMs
https://github.com/substack/node-browserify/issues/313
Optional deduplication (based on contents). Optional because there are some edge cases e.g. 507.
Tests
- Strip #! shebang lines (probably already OK, just add test)
- gluejs --include . => return equivalent package
- parse invalid json file
- transforms installed globally should also work
- require() a core module should look in your node_modules/ directory before using one of its browser builtins
- infer-packages should work when main is . or main is empty
- allow core modules to be ignored
- allow core modules to be ignored in package.json browser field
- --standalone A.B.C should construct nested objects
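The last test case above is simple to pin down: --standalone A.B.C should attach the bundle's export to a nested path on the global object, creating intermediate objects as needed. A sketch of the expected behavior (function name hypothetical):

```javascript
// Sketch: attach `exports` at a dotted path on `root`, e.g. root.A.B.C,
// creating any missing intermediate objects along the way.
function attachStandalone(root, path, exports) {
  const parts = path.split('.');
  let obj = root;
  parts.slice(0, -1).forEach(p => { obj = obj[p] = obj[p] || {}; });
  obj[parts[parts.length - 1]] = exports;
  return root;
}
```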
--no-parse file support
via:
How to set up a multi-page gluejs-based project that has the following goals:
--compare: compares the trees from using the detective strategy vs. the default strategy.
Interoperability with libraries that export globals.
Two options:
- --remap and remap the global; include two files instead of one file.
- Append module.exports = global, using command: --shim { file: "", name: "", global: "", deps: "" }.
This wraps the file in such a way that the global variable is available as require(name).
-> this is basically --remap name=require(file) --no-parse file --append-text file "module.exports = global;"
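The append-text variant can be sketched directly: the file runs unparsed, then "module.exports = <global>;" is appended so the module wrapper exposes the library's global via require(). Illustrative only; the real wrapping carries more metadata.

```javascript
// Sketch: shim a global-exporting library by appending a module.exports line.
function shimWrap(source, globalName) {
  return source + '\nmodule.exports = ' + globalName + ';';
}

// Evaluate the wrapped source the way a CommonJS module wrapper would:
function runModule(wrapped) {
  const module = { exports: {} };
  new Function('module', 'exports', wrapped)(module, module.exports);
  return module.exports;
}
```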
Mocking out dependencies for testing: This is probably more useful when used via the Express middleware, but --remap name=path allows specific externals to be replaced, which can be used for testing:
Might be nice to make this even easier to use from tests... via REST API?
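The mocking idea can be sketched as an overlay on the module table, which is what a remap-style substitution amounts to at test time (table shape and names are illustrative, not the gluejs internals):

```javascript
// Sketch: replace a real dependency with a mock before building/serving.
function withMocks(moduleTable, mocks) {
  return Object.assign({}, moduleTable, mocks);
}

const table = {
  request: () => { throw new Error('no network in tests'); }
};
const mocked = withMocks(table, {
  // mock returns a fake client that always answers 200
  request: () => (url) => ({ status: 200, url })
});
```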
--dedupe-force modulename should force only a single instance of a module, in spite of conflicting package.json data.
Cleaner separation between Map and Reduce phases.
During the Map phase, a number of tasks are applied to each file.
The tasks take one input, run it through transforms, and write a file into cache. If nothing needs to be done, then a simple direct-read tuple is created.
The Map phase uses a shared queue which supports incremental task queueing.
During the reduce phase, the list of metadata tuples is converted into a set of packages using package inference. Then, the underlying data for each metadata tuple is read in serial order and written to the final output. On read, the final wrapping is done, utilizing the inferred packages.
If any tasks require the package metadata, then they must be performed during the reduce phase. In theory one could add another map phase after packages have been inferred.
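The two phases described above could be sketched as follows, heavily simplified and synchronous (the real pipeline is queued and cached):

```javascript
// Map: run each file through its transforms, producing metadata tuples.
function mapPhase(files, transforms) {
  return files.map(f => ({
    filename: f.name,
    content: transforms.reduce((src, t) => t(src), f.source),
    deps: []
  }));
}

// Reduce: package inference elided; read tuples in serial order,
// wrap each file, and join into the final output.
function reducePhase(tuples) {
  return tuples
    .map(t => '/* ' + t.filename + ' */\n' + t.content)
    .join('\n');
}
```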
-- check for full build match --
Transform queue (transforms/index.js):
[ Initialize queue ]
[ Add new file ]
[ Check that file has not been processed ]
[ Check for cached results, and return if done ]
[ Apply user and other exclusions ]
[ Queue task ]
[ Task run ]
[ Match tasks ]
[ If no tasks, just run the parse-result-and-update-deps ]
[ Push deps to queue when done ]
[ If transformations, append parse-result-and-update-deps task ]
[ Run transforms ]
[ Push deps to queue when done ]
[ Start running the queue ]
[ Once done ]
[ Check queue for more, assign if under parallelism limit ]
Direct-read tuple (no transforms needed, content points at the original file):
{ filename: path-to-file, content: path-to-file, deps: [ "str", "str2" ] }
Transformed tuple (content points at the cached result file):
{ filename: "original-path-to-file", content: path-to-result-file, deps: [ "str", "str2" ] }
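The shared queue with incremental queueing and a parallelism limit (mentioned for the Map phase) could be sketched like this; the real version is event-driven and cache-aware:

```javascript
// Sketch: a task queue that skips already-seen items and caps concurrency.
function makeQueue(limit, worker) {
  const pending = [];
  const seen = new Set();
  let running = 0, onDrain = null;

  function next() {
    while (running < limit && pending.length) {
      const item = pending.shift();
      running++;
      worker(item, () => {        // worker calls back when the task is done
        running--;
        next();                   // assign more work if under the limit
        if (!running && !pending.length && onDrain) onDrain();
      });
    }
  }
  return {
    add(item) {                   // incremental queueing: deps can be pushed later
      if (seen.has(item)) return; // skip files that were already processed
      seen.add(item);
      pending.push(item);
      next();
    },
    drain(cb) { onDrain = cb; if (!running && !pending.length) cb(); }
  };
}
```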
-- generate joinable list --
Queue tasks:
in a generic way by defining a bunch of callbacks, rather than doing these each in ugly and ad-hoc ways.
--shim:
--no-externals:
{
  expr: '*',
  phase: 'package-filter',
  task: function() {
    return false;
  }
}
--only-externals:
{
expr: base + '/**',
phase: 'file-filter',
task: function() {
return false;
}
}
--ignore:
argv.ignore.toExpr()
{
expr: file paths,
phase: 'file-filter',
task: function() {
// no need to parse the file since it's always an empty file
self.addResult(filename, self.ignoreFile, [], [], []);
// queue has been updated, finish this task
self.emit('miss', filename);
return false;
}
}
{
expr: packages,
phase: 'package-filter',
task: function() {
return false;
}
}
Package generator queue (commonjs2/index.js):
[ Infer packages ]
[ Infer package deps ]
- for --parse: just collect
- for --no-parse: guess (e.g. modules in folders at higher levels, and one-level-removed child node_modules)
[ Update reporter size during read ]
[ Wrap file during read (w/ full meta?) ]
{
id: ..,
main: "original-name",
basepath: ...,
files: [ ... ],
deps: { "name": "target" }
}
[ Join files ]
-- generate full build --
Test cases:
--global-require example (e.g. how to make require('foo') work in arbitrary script tags). TODO
--cache-clean: Clears the global cache folder by deleting it. This is always done before processing anything else.
There is a builtin Mocha test server task, which takes a set of tests as input and creates a server specifically for running one or more tests.
If you're not using Mocha, you can still use the API to create a package and serve it (with test-framework wrapping).