
For Anza
—Sufyan bin Uzayr
They tell me we’re living in an information age, but none of it seems to be the information I need or brings me closer to what I want to know. In fact (I’m becoming more and more convinced) all this electronic wizardry only adds to our confusion, delivering inside scoops and verdicts about events that have hardly begun: a torrent of chatter moving at the speed of light, making it nearly impossible for any of the important things to be heard.
—Matthew Flaming, The Kingdom of Ohio
The notion that “technology moves quickly” is a well-worn aphorism, and with good reason: technology does move quickly. But at this moment, JavaScript in particular is moving very quickly indeed—much like that “torrent of chatter moving at the speed of light” that Matthew Flaming refers to in The Kingdom of Ohio. The language is in the midst of what many have called a renaissance, brought about by the rapidly increasing sophistication of browser-based applications and the rising popularity of JavaScript on the server, thanks to Node.js.
An almost feverish pace of innovation is occurring within the JavaScript community that, while endlessly fascinating to follow, also presents some unique challenges of its own. JavaScript’s ecosystem of libraries, frameworks, and utilities has grown dramatically. Where once a small number of solutions for any given problem existed, many can now be found, and the options continue to grow by the day. As a result, developers find themselves faced with the increasingly difficult task of choosing the appropriate tools from among many seemingly good options.
If you’ve ever found yourself wondering why JavaScript seems to be attracting so much attention, as we have, it’s worth stopping for a moment to consider the fact that JavaScript, a language that was created by one person in 10 days, now serves as the foundation upon which much of the Web as we know it sits. A language that was originally created to solve relatively simple problems is now being applied in new and innovative ways that were not originally foreseen. What’s more, JavaScript is a beautifully expressive language, but it’s not without its share of rough edges and potential pitfalls. While the language is flexible, efficient, and ubiquitous, concepts such as the event loop and prototypal inheritance can prove particularly challenging for those coming to it for the first time.
For these and many other reasons, the development community at large is still coming to terms with how best to apply the unique features that JavaScript brings to the table. We’ve no doubt only scratched the surface of what the language and the community behind it are capable of. For those with an insatiable appetite for knowledge and a desire to create, now is the perfect time to be a JavaScript developer.
We have written JavaScript Frameworks for Modern Web Development to serve as your guide to a wide range of popular JavaScript tools that solve difficult problems at both ends of the development stack: in the browser and on the server. The tutorials and downloadable code examples contained within this book illustrate the usage of tools that manage dependencies, structure code in a modular fashion, automate repetitive build tasks, create specialized servers, structure client-side applications, facilitate horizontal scaling, perform event logging, and interact with disparate data stores.
The libraries and frameworks covered include Grunt, Yeoman, PM2, RequireJS, Browserify, Knockout, Angular, Kraken, Mongoose, Knex, Bookshelf, Async.js, Underscore, Lodash, React, and Vue.js.
In writing JavaScript Frameworks for Modern Web Development, our goal was to create a filter for the “torrent of chatter” that often seems to surround JavaScript and, in so doing, to allow what we believe are some important things to be heard. We hope the information contained within these pages proves as useful to you as it has to us.
This book is intended for web developers who are already confident with JavaScript, but also frustrated with the sheer number of solutions that exist for seemingly every problem. This book helps lift the fog, providing the reader with an in-depth guide to specific libraries and frameworks that well-known organizations are using right now with great success. Topics pertaining to both client-side and server-side development are covered. As a result, readers will gain the most benefit from this book if they already have at least an intermediate familiarity with the web browser Document Object Model (DOM), common client-side libraries like jQuery, and Node.js.
This book covers a wide selection of JavaScript tools that are applicable throughout the entire development process, from a project’s first commit to its first release and beyond. To that end, the chapters have been grouped into the following parts.
Larry Wall, the creator of Perl, describes the three virtues of a great programmer as laziness, impatience, and hubris. In this chapter, we’ll focus on a tool that will help you strengthen the virtue of laziness—Grunt. This popular task runner provides developers with a framework for creating command-line utilities that automate repetitive build tasks such as running tests, concatenating files, compiling Sass/LESS stylesheets, checking for JavaScript errors, and more. After reading this chapter, you’ll know how to use several popular Grunt plugins as well as how to go about creating and sharing your own plugins with the community.
Yeoman provides JavaScript developers with a mechanism for creating reusable templates (“generators”) that describe the overall structure of a project (initially required dependencies, Grunt tasks, etc.) in a way that can be easily reused over and over. Broad community support also allows you to take advantage of a wide variety of preexisting templates. In this chapter, we’ll walk through the process of installing Yeoman and using several popular preexisting generators. Finally, we’ll take a look at how we can create and share our own templates with the community.
In this chapter, we will close out our discussion of development tools by taking a look at PM2, a command-line utility that simplifies many of the tasks associated with running Node applications, monitoring their status, and efficiently scaling them to meet increasing demand.
JavaScript lacks a native method for loading external dependencies in the browser—a frustrating oversight for developers. Fortunately, the community has stepped in to fill this gap with two very different and competing standards: the Asynchronous Module Definition (AMD) API and CommonJS. We’ll dive into the details of both and take a look at widely used implementations of each: RequireJS and Browserify. Each has its merits, which we’ll discuss in detail, but both can have a profoundly positive impact on the way in which you go about structuring your applications.
In recent years, web developers have witnessed a sharp rise in popularity of so-called “single-page apps.” Such applications exhibit behavior once available only on the desktop, but at the expense of increased code complexity within the browser. In this section, we’ll dive into two widely used front-end frameworks that help minimize that complexity by providing proven patterns for solving frequently encountered problems: Knockout and Angular. Knockout focuses on the relationship between view and data, but otherwise leaves the application architecture and plumbing to the developer’s discretion. Angular takes a more prescriptive approach, covering the view, data transfer, Dependency Injection, and so on.
Client-side applications aren’t very useful without a server with which to interact. In this chapter, we’ll take a look at one popular framework that supports developers in the creation of back-end applications: Kraken.
At the core of every application lies the most important component of any development stack—the data that our users seek. In this section, we’ll become familiar with two libraries that help simplify some of the complexity that’s often experienced when interacting with popular storage platforms such as MongoDB, MySQL, PostgreSQL, and SQLite. After reading this section, you’ll be comfortable defining schemas, associations, lifecycle “hooks,” and more.
The asynchronous nature of JavaScript provides developers with a significant degree of flexibility—as opposed to forcing developers to execute their code in a linear fashion, JavaScript allows developers to orchestrate multiple actions simultaneously. Unfortunately, along with this flexibility comes a significant degree of additional complexity—what many developers refer to as “callback hell” or the “pyramid of doom.”
A number of wonderfully useful libraries exist that this book would be remiss not to cover, but for which additional parts are not necessarily warranted. This part will cover such libraries.
Underscore (and its successor, Lodash) is an incredibly useful collection of functions that simplifies many frequently used patterns that can be tedious to implement otherwise. This brief chapter will bring these libraries to your attention, along with some of the more popular extensions that can also be included to enhance their usefulness even further. Examples are included that highlight some of the most frequently used portions of these libraries.
React and Vue.js
In this section, we will cover JavaScript frameworks that are geared for front-end development, such as React and Vue.js.
React, having the backing of Facebook, has risen in popularity in a very short span of time and continues to be a preferred choice for many developers.
On the other hand, Vue.js is a slightly less popular name in the field, but it has been gaining a steady and very loyal following, primarily due to its ease of use and simplicity.
Each chapter in this book contains many examples, the source code for which may be downloaded from www.apress.com/9781484249949 in zipped form.
Most examples are run with the Node.js runtime, which may be obtained from https://nodejs.org. Chapters with additional prerequisites explain the procedures necessary for downloading and installing them. (For example, MongoDB is necessary to run the examples in Chapter 9, which covers Mongoose.)
Any additional steps necessary for running code examples (e.g., executing curl requests) or interacting with a running example (e.g., opening a web browser and navigating to a specific URL) are explained alongside each listing.
My mom and dad, for everything they have done for me
Faisal Fareed and Sadaf Fareed, my siblings, for helping with things back home
Nancy Chen, Content Development Editor for this book, for keeping track of everything and for being very patient as I kept missing one deadline or the other
The Apress team, especially Louise Corrigan, Jade Scard, and James Markham, for ensuring that the book’s content, layout, formatting, and everything else remains perfect throughout
The coauthors of this book’s first edition and the tech reviewer, for going through the manuscript and providing their insight and feedback
Typesetters, cover designers, printers, and everyone else, for their part in the development of this book
All the folks associated with Parakozm, either directly or indirectly, for their help and support
The JavaScript community at large, for all their hard work and efforts
—Sufyan bin Uzayr




Aleem can be found on LinkedIn at www.linkedin.com/in/aleemullah/.
I’m lazy. But it’s the lazy people who invented the wheel and the bicycle because they didn’t like walking or carrying things.
—Lech Walesa, former president of Poland
In his book Programming Perl, Larry Wall (the well-known creator of the language) puts forth the idea that all successful programmers share three important characteristics: laziness, impatience, and hubris. At first glance, these traits all sound quite negative, but dig a little deeper, and you’ll find the hidden meaning in his statement:
Laziness: Lazy programmers hate to repeat themselves. As a result, they tend to put a lot of effort into creating useful tools that perform repetitive tasks for them. They also tend to document those tools well, to spare themselves the trouble of answering questions about them later.
Impatience: Impatient programmers have learned to expect much from their tools. This expectation teaches them to create software that doesn’t just react to the needs of its users, but that actually attempts to anticipate those needs.
Hubris: Good programmers take great pride in their work. It is this pride that compels them to write software that others won’t want to criticize—the type of work that we should all be striving for.
Script and stylesheet compilation and minification
Testing
Linting
Database migrations
Deployments
Create configurable tasks that automate the repetitive aspects of software development that accompany nearly every project
Interact with the file system using simple yet powerful abstractions provided by Grunt
Publish Grunt plugins from which other developers can benefit and to which they can contribute
Take advantage of Grunt’s preexisting library of community-supported plugins
Grunt provides developers with a toolkit for creating command-line utilities that perform repetitive project tasks. Examples of such tasks include the minification of JavaScript code and the compilation of Sass stylesheets, but there’s no limit to how Grunt can be put to work. Grunt can be used to create simple tasks that address the specific needs of a single project—tasks that you don’t intend to share or reuse—but Grunt’s true power derives from its ability to package tasks as reusable plugins that can then be published, shared, used, and improved upon by others.
Four core components make Grunt tick, which we will now cover.
The hello-world task that we’ve just seen serves as an example of a basic, stand-alone Grunt task. Such tasks can be used to implement simple actions specific to the needs of a single project that you don’t intend to reuse or share. Most of the time, however, you will find yourself interacting not with stand-alone tasks, but instead with tasks that have been packaged as Grunt plugins and published to npm so that others can reuse them and contribute to them.
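The hello-world listing itself is not reproduced here, but a minimal stand-alone task registered directly in a Gruntfile might look something like the following sketch (the greeting is purely illustrative):

```javascript
// Gruntfile.js - a minimal sketch of a stand-alone Grunt task
module.exports = function(grunt) {
  grunt.registerTask('hello-world', 'Prints a friendly greeting.', function() {
    grunt.log.writeln('Hello, world!');
  });
};
```

Running $ grunt hello-world from the project root would execute this task.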
A Grunt plugin is a collection of configurable tasks (published as an npm package) that can be reused across multiple projects. Thousands of such plugins exist. In Listing 1-2, Grunt’s loadNpmTasks() method is used to load the grunt-contrib-uglify Node module, a Grunt plugin that merges a project’s JavaScript code into a single, minified file that is suitable for deployment.
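Listing 1-2 is not shown here; the sketch below illustrates the general pattern of loading and configuring such a plugin (the source and destination paths are hypothetical):

```javascript
// Gruntfile.js - sketch of loading and configuring grunt-contrib-uglify
module.exports = function(grunt) {
  grunt.initConfig({
    uglify: {
      release: {
        // Merge every source script into a single minified file (paths are illustrative)
        files: { 'dist/app.min.js': ['src/**/*.js'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['uglify']);
};
```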
A list of all available Grunt plugins can be found at http://gruntjs.com/plugins . Plugins whose names are prefixed with contrib- are officially maintained by the developers behind Grunt. In the repository, officially maintained plugins are now also marked by a “star” icon, making them easier to differentiate from third-party developers’ plugins.
Grunt is known for emphasizing “configuration over code”: the creation of tasks and plugins whose functionality is tailored by configuration that is specified within each project. It is this separation of code from configuration that allows developers to create plugins that are easily reusable by others. Later in the chapter, we’ll take a look at the various ways in which Grunt plugins and tasks can be configured.
The Gruntfile shown in Listing 1-7 is for a relatively simple project. We already find this example to be slightly unwieldy, but within larger projects we have seen this file balloon to many times this size. The result is an unreadable and difficult-to-maintain mess. Experienced developers would never write their code in a way that combines functionality from across unrelated areas into a single, monolithic file, so why should we approach our task runner any differently?
As previously mentioned, tasks serve as the foundation on which Grunt is built—everything begins here. A Grunt plugin, as you’ll soon discover, is nothing more than one or more tasks that have been packaged into a Node module and published via npm. We’ve already seen a few examples that demonstrate the creation of basic Grunt tasks, so let’s take a look at some additional features that can help us get the most out of them.
In Listing 1-9, “dot notation” is used for accessing nested configuration values. In the same way, dot notation can be used to set nested configuration values. If at any point Grunt encounters a path within the configuration object that does not exist, Grunt will create a new, empty object without throwing an error.
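As a rough illustration of both behaviors (the configuration keys are hypothetical):

```javascript
grunt.initConfig({
  app: { paths: { src: 'src' } }
});

// Dot notation reads nested values...
var srcPath = grunt.config.get('app.paths.src'); // 'src'

// ...and can also set them; missing intermediate objects are created rather than
// causing an error
grunt.config.set('app.paths.dest', 'dist');
```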
In addition to basic tasks, Grunt offers support for what it calls “multi-tasks.” Multi-tasks are easily the most complicated aspect of Grunt, so if you find yourself confused at first, you’re not alone. After reviewing a few examples, however, their purpose should start to come into focus—at which point you’ll be well on your way toward mastering Grunt.
In this example, our multi-task ran twice, once for each available target (mammals and birds). Notice in Listing 1-15 that within our multi-task we referenced two properties: this.target and this.data. These properties allow our multi-task to fetch information about the target that it is currently running against.
Multi-task options provide developers with a mechanism for defining global options for a task, which can then be overridden at the target level. In this example, a global format in which to list animals ('array') is defined at the task level. The mammals target has chosen to override this value ('json'), while the birds target has not. As a result, mammals will be displayed as JSON, while birds will be shown as an array due to its inheritance of the global option.
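The relevant listings are not reproduced here; the following sketch approximates a multi-task with two targets and task-level options (the task name and animal lists are illustrative):

```javascript
// Gruntfile.js - sketch of a multi-task with per-target data and options
module.exports = function(grunt) {
  grunt.initConfig({
    'list-animals': {
      options: { format: 'array' },      // task-level ("global") option
      mammals: {
        options: { format: 'json' },     // overrides the task-level option
        animals: ['Cat', 'Dog', 'Dolphin']
      },
      birds: {
        animals: ['Owl', 'Penguin']      // inherits format: 'array'
      }
    }
  });

  grunt.registerMultiTask('list-animals', 'Lists animals by category.', function() {
    var options = this.options();
    grunt.log.writeln('Target: ' + this.target);
    if (options.format === 'json') {
      grunt.log.writeln(JSON.stringify(this.data.animals));
    } else {
      grunt.log.writeln(this.data.animals.join(', '));
    }
  });
};
```

Running $ grunt list-animals executes the task once per target, while $ grunt list-animals:mammals runs only the specified target.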
The vast majority of Grunt plugins that you will encounter are configurable as multi-tasks. The flexibility afforded by this approach allows you to apply the same task differently under different circumstances. A frequently encountered scenario involves the creation of separate targets for each build environment. For example, when compiling an application, you may want to modify the behavior of a task based on whether you are compiling for a local development environment or in preparation for release to production.
Listing 1-19 shows a sample Gruntfile that loads the contents of the project’s package.json file using one of several built-in methods for interacting with the file system that are discussed in further detail later in the chapter. The contents of this file are then stored under the pkg key of Grunt’s configuration object. In Listing 1-20, we see a task that is able to directly reference this information through the use of configuration templates.
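Those listings are not reproduced here; a condensed sketch of the pattern they describe might look like this (the notify key and message are illustrative):

```javascript
// Gruntfile.js - sketch of configuration templates backed by package.json data
module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    notify: {
      // <%= ... %> templates are expanded against the configuration object
      message: 'Building <%= pkg.name %> v<%= pkg.version %>...'
    }
  });

  grunt.registerTask('describe', function() {
    // grunt.config.get() returns the value with its template already expanded
    grunt.log.writeln(grunt.config.get('notify.message'));
  });
};
```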
| Method | Description |
|---|---|
| grunt.log.write() | Prints a message to the console |
| grunt.log.writeln() | Prints a message to the console, followed by a newline character |
| grunt.log.oklns() | Prints a success message to the console, followed by a newline character |
| grunt.log.error() | Prints an error message to the console, followed by a newline character |
| grunt.log.subhead() | Prints a bold message to the console, followed by a newline character |
| grunt.log.debug() | Prints a message to the console only if the --debug flag was passed |
| Method | Description |
|---|---|
| grunt.fail.warn() | Displays a warning and aborts Grunt immediately. Tasks will continue to run if the --force option is passed |
| grunt.fail.fatal() | Displays an error and aborts Grunt immediately |
As a build tool, it comes as no surprise that the majority of Grunt’s plugins interact with the file system in one way or another. Given its importance, Grunt provides helpful abstractions that allow developers to interact with the file system with a minimal amount of boilerplate code.
| Method | Description |
|---|---|
| grunt.file.read() | Reads and returns a file’s contents |
| grunt.file.readJSON() | Reads a file’s contents, parsing the data as JSON, and returns the result |
| grunt.file.write() | Writes the specified contents to a file, creating intermediate directories, if necessary |
| grunt.file.copy() | Copies a source file to a destination path, creating intermediate directories, if necessary |
| grunt.file.delete() | Deletes the specified file path; deletes files and folders recursively |
| grunt.file.mkdir() | Creates a directory, along with any missing intermediate directories |
| grunt.file.recurse() | Recurses into a directory, executing a callback for every file that is found |
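A brief, hedged illustration of a few of these helpers in action (all paths and the task name are hypothetical):

```javascript
// Sketch: a few of Grunt's file system helpers inside a task
grunt.registerTask('snapshot', function() {
  var pkg = grunt.file.readJSON('package.json');

  grunt.file.write('dist/build-info.txt', pkg.name + ' built ' + new Date().toISOString());

  grunt.file.recurse('src', function(abspath, rootdir, subdir, filename) {
    grunt.log.writeln('Found: ' + abspath);
  });
});
```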
Many Grunt tasks that interact with the file system rely heavily on the concept of source-destination mappings, a format that describes a set of files to be processed and a corresponding destination for each. Such mappings can be tedious to construct, but thankfully Grunt provides helpful shortcuts that address this need.
As you can see, our project has an images folder containing three files. Knowing this, let’s take a look at a few ways in which Grunt can help us iterate through these files.
The combination of the src configuration property and the this.files multi-task property provides developers with a concise syntax for iterating over multiple files. The contrived example that we’ve just looked at is fairly simple, but Grunt also provides additional options for tackling more complex scenarios. Let’s take a look.
The expand option, when paired with the dest option, instructs Grunt to generate a separate entry in our task’s this.files array for every file it finds, each with its own corresponding dest property. Using this approach, we can easily create source-destination mappings that can be used to copy (or move) files from one location to another.
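A rough sketch of what such a configuration and task might look like (folder names follow the images example above; the task itself is illustrative):

```javascript
// Gruntfile.js - sketch of expanded source-destination mappings
module.exports = function(grunt) {
  grunt.initConfig({
    'copy-images': {
      all: {
        expand: true,        // generate one src-dest pair per matched file
        cwd: 'images',
        src: ['**/*'],
        filter: 'isFile',
        dest: 'dist/images'
      }
    }
  });

  grunt.registerMultiTask('copy-images', function() {
    this.files.forEach(function(file) {
      file.src.forEach(function(src) {
        grunt.file.copy(src, file.dest);
      });
    });
  });
};
```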
Checking JavaScript code for errors (i.e., “linting”)
Compiling Sass stylesheets
Running unit tests
Let’s take a look at a few examples that demonstrate such workflows put into action.
In order for this example to function, you must install Compass, an open source CSS authoring framework. You can find additional information on how to install Compass at http://compass-style.org/install .
A large library of community-supported plugins is what makes Grunt truly shine—a library that will allow you to start benefitting from Grunt immediately, without the need to create complex tasks from scratch. If you need to automate a build process within your project, there’s a good chance that someone has already done the “grunt” work (zing!) for you.
In this section, you’ll discover how you can contribute back to the community with Grunt plugins of your own creation.
One of the first things you’ll want to do is create a public GitHub repository in which to store your new plugin. The example that we will be referencing is included with the source code that accompanies this book.
The most important point to note here is that there is no special structure or knowledge required (apart from what has already been covered in this chapter) for the creation of Grunt plugins. The process mirrors that of integrating Grunt into an existing project—the creation of a Gruntfile that loads tasks, along with the tasks themselves. Once published to npm, other Grunt projects will be able to load your plugin in the same way that other plugins have been referenced throughout this chapter.
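The chapter’s file-report plugin itself is not reproduced here, but the sketch below shows the general shape a plugin’s task file takes; the details are illustrative rather than the actual implementation:

```javascript
// tasks/file-report.js - sketch of the shape of a Grunt plugin task
module.exports = function(grunt) {
  grunt.registerMultiTask('file-report', 'Reports on the files it is given.', function() {
    this.files.forEach(function(file) {
      file.src.forEach(function(src) {
        grunt.log.writeln(src + ': ' + grunt.file.read(src).length + ' characters');
      });
    });
  });
};
```

A project consuming the published plugin would install it from npm and load it with grunt.loadNpmTasks(), just as we did with grunt-contrib-uglify earlier.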

The output generated by the file-report task
If this is your first time publishing a module to npm, you will be asked to create an account.
What makes Grunt tick (tasks, plugins, and configuration objects)
How to configure tasks and plugins
How to use many of the helpful built-in utilities that Grunt makes available for providing user feedback and interacting with the file system
How to create and share your own Grunt plugins
Grunt: http://gruntjs.com
JSHint: http://jshint.com
grunt-contrib-watch: https://github.com/gruntjs/grunt-contrib-watch
grunt-contrib-jshint: https://github.com/gruntjs/grunt-contrib-jshint
grunt-contrib-uglify: https://github.com/gruntjs/grunt-contrib-uglify
grunt-contrib-compass: https://github.com/gruntjs/grunt-contrib-compass
grunt-mocha-test: https://github.com/pghalliday/grunt-mocha-test
Syntactically Awesome Stylesheets (Sass): http://sass-lang.com
Compass: http://compass-style.org
One only needs two tools in life: WD-40 to make things go, and duct tape to make them stop.
—G. Weilacher
The development community has witnessed a role reversal of sorts take place in recent years. Web applications, once considered by many to be second-class citizens in comparison to their native counterparts, have largely supplanted traditional desktop applications, thanks in large part to the widespread adoption of modern web development technologies and the rise of the mobile Web. But as web applications have grown increasingly sophisticated, so too have the tools on which they rely and the steps required to bootstrap them into existence.
The topic of this chapter, Yeoman, is a popular project “scaffolding” tool that helps to alleviate this problem by automating the tedious tasks associated with getting new applications off the ground. Yeoman provides a mechanism for creating reusable templates that describe a project’s initial file structure, HTML, third-party libraries, and task runner configurations. These templates, which can be shared with the wider development community via npm, allow developers to bootstrap new projects that follow agreed-upon best practices in a matter of minutes.
Install Yeoman
Take advantage of Yeoman generators that have already been published by the community
Contribute back to the community with your own Yeoman generators
Yeoman allows developers to quickly create the initial structure of an application through the use of reusable templates, which Yeoman refers to as “generators.” To better understand how this process can improve your workflow, let’s create a new project with the help of the modernweb generator that was created specifically for this chapter. Afterward, we will look at how this generator was created, providing you with the knowledge you need to create and share your own custom Yeoman generators with the wider development community.
Grunt
Bower
jQuery
AngularJS
Browserify
Compass
This generator’s name is prefixed with generator-, which is an important convention that all Yeoman generators must follow. At runtime, Yeoman will determine what (if any) generators have been installed by searching for global modules whose names follow this format.
JavaScript libraries are compiled into a single, minified script.
Sass stylesheets are compiled.
The source code of the application itself is compiled via Browserify.
An instance of Express is created to serve our project.
Various watch scripts are initialized that will automatically recompile our project as changes are made.

Our new project’s home page, opened for us by the default Grunt task
bower.json
Gruntfile.js
package.json
public/index.html
In their simplest form, generators act as configurable project templates that simplify the creation of new projects, but that’s not their only purpose. In addition to assisting with the initial creation of new projects, generators can also include other commands that project maintainers will find useful throughout development.
The remainder of this chapter will focus on the creation of a custom Yeoman generator—the same one used in the previous section to bootstrap a new project built around AngularJS (among other tools). Afterward, you will be well prepared to begin creating your own generators that will allow you to quickly get up and running with workflows that meet your specific needs.
Although we are following the same steps that were used to create the modernweb generator that was referenced earlier in this chapter, we are assigning a different name to our new module, so as not to conflict with the one that has already been installed. Also note the inclusion of yeoman-generator within our module’s list of keywords. Yeoman’s web site maintains a list of every generator available within npm, making it easy for developers to find preexisting generators to suit their needs. If a generator is to be included within this list, it must include this keyword, along with a description in its package.json file.
Create a folder named app at the root level of our module.
Create a folder named templates within our new app folder.
Place various files within our templates folder that we want to copy into the target project (e.g., HTML files, Grunt tasks, etc.).
Create the script shown in Listing 2-9, which is responsible for driving the functionality for this command.
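Listing 2-9 is not reproduced here; as a rough sketch, a generator’s main command written against the yeoman-generator package can look something like the following (the prompt, template names, and class-based style are illustrative, and the exact base-class API differs between yeoman-generator versions):

```javascript
// app/index.js - rough sketch of a generator's main command
const Generator = require('yeoman-generator');

module.exports = class extends Generator {
  prompting() {
    return this.prompt([
      { type: 'input', name: 'appName', message: 'What is your project called?' }
    ]).then((answers) => {
      this.answers = answers;
    });
  }

  writing() {
    // Copy files out of the templates folder, filling in template placeholders
    this.fs.copyTpl(
      this.templatePath('package.json'),
      this.destinationPath('package.json'),
      { appName: this.answers.appName }
    );
  }

  install() {
    this.npmInstall();
  }
};
```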

The contents of our generator’s app folder. The contents of the templates folder will be copied into the target project.
prompting()
writing()
install()
end()
initializing(): Initialization methods (checking project state, getting configs).
prompting(): Prompting the user for information.
configuring(): Saving configuration files.
default(): Prototype methods with names not included within this list will be executed during this step.
writing(): Write operations specific to this generator occur here.
conflicts(): Conflicts are handled here (used internally by Yeoman).
install(): Installation procedures occur here (npm, bower).
end(): Last function to be called. Cleanup/closing messages.
Once Yeoman has compiled a list of the various prototype methods that exist within our generator, it will execute them in the priority shown in the preceding list.
The process by which Yeoman generators can modify their behavior and alter the contents of templates based on a user’s answers to prompts opens up a lot of exciting possibilities. It allows for the creation of new projects that are dynamically configured according to a user’s specific needs. It’s this aspect of Yeoman, more than any other, that makes the tool truly useful.
After responding to the generator’s questions, Yeoman will move forward with building our new project, just as it did with the modernweb generator that we used in the first half of this chapter. Once this process is finished, run Grunt’s default task—$ grunt—to build and launch the project.
Create a folder named route at the root level of our module.
Create a folder named templates within our new route folder.
Place various files within our templates folder that we want to copy into the target project.
Create the script shown in Listing 2-14, which is responsible for driving the functionality for the route command.
The script shown in Listing 2-14 looks very similar to that shown in Listing 2-9, the primary difference being the use of Yeoman’s NamedBase class. By creating a sub-generator that extends from NamedBase, we alert Yeoman to the fact that this command expects to receive one or more arguments.
With the help of Yeoman’s composeWith() method, simple sub-generators can be combined (i.e., “composed”) with one another to create fairly sophisticated workflows. By taking advantage of this method, developers can create complex, multicommand generators while avoiding the use of duplicate code across commands.
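A heavily simplified sketch of such a sub-generator follows (older yeoman-generator releases expose the NamedBase class for this purpose; newer releases achieve the same effect by declaring a required argument, as shown here; the file names are illustrative):

```javascript
// route/index.js - rough sketch of a sub-generator that expects a name argument
const Generator = require('yeoman-generator');

module.exports = class extends Generator {
  constructor(args, opts) {
    super(args, opts);
    // Equivalent in spirit to extending NamedBase: require a positional argument
    this.argument('name', { type: String, required: true });
  }

  writing() {
    this.fs.copyTpl(
      this.templatePath('route.js'),
      this.destinationPath('app/routes/' + this.options.name + '.js'),
      { name: this.options.name }
    );
  }
};
```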
Lastly, it is worth noting that if you get stuck while developing with Yeoman, or your generator does not seem to be functioning as desired, there is a built-in troubleshooting command you can use to diagnose code issues, compatibility issues, and more:
yo doctor
Yeoman is a simple but powerful tool that automates the tedious tasks associated with bootstrapping new applications into existence, speeding up the process by which developers can move from concept to prototype. When used, it allows developers to focus their attention where it matters most—on the applications themselves.
Thousands of Yeoman generators have been published to npm, making it easy for developers to experiment with a wide variety of tools, libraries, frameworks, and design patterns (e.g., Bower, Grunt, AngularJS, Knockout, React) with which they may not have experience.
Yeoman: http://yeoman.io/
Yeoman on npm: www.npmjs.com/package/yo
Do not wait; the time will never be “just right.” Start where you stand, and work with whatever tools you may have at your command, and better tools will be found as you go along.
—George Herbert
Working with processes
Monitoring logs
Monitoring resource usage
Advanced process management
Load balancing across multiple processors
Zero downtime deployments
Node’s package manager (npm) allows users to install packages in one of two contexts: locally or globally. In this example, pm2 is installed within the global context, which is typically reserved for command-line utilities.

Launching the application shown in Listing 3-2 with PM2
| Heading | Description |
|---|---|
| App name | The name of the process. Defaults to the name of the script that was executed. |
| id | A unique ID assigned to the process by PM2. Processes can be referenced by name or ID. |
| mode | The method of execution (fork or cluster). Defaults to fork. Explored in more detail later in the chapter. |
| pid | A unique number assigned by the operating system to the process. |
| status | The current status of the process (e.g., online, stopped, etc.). |
| restart | The number of times the process has been restarted by PM2. |
| uptime | The length of time the process has been running since last being restarted. |
| memory | The amount of memory consumed by the process. |
| watching | Indicates whether PM2 will automatically restart the process when it detects changes within a project’s file structure. Particularly useful during development. Defaults to disabled. |

Accessing the sole route defined by our Express application
Figure 3-2 assumes the existence of the curl command-line utility within your environment. If you happen to be working in an environment where this utility is not available, you could also verify the status of this application by opening it directly within your web browser.
| Command | Description |
|---|---|
| list | Displays an up-to-date version of the table shown in Listing 3-4 |
| stop | Stops the process, without removing it from PM2’s list |
| restart | Restarts the process |
| delete | Stops the process and removes it from PM2’s list |
| show | Displays details regarding the specified process |

Viewing details for a specific PM2 process
At this point, you are now familiar with some of the basic steps involved in interacting with PM2. You’ve learned how to create new processes with the help of PM2’s start command. You’ve also discovered how you can subsequently manage running processes with the help of commands such as list, stop, restart, delete, and show. We’ve yet to discuss, however, much of the real value that PM2 brings to the table in regard to managing Node processes. We’ll begin that discussion by discovering how PM2 can assist Node applications in automatically recovering from fatal errors.

Output provided by Node after crashing from the error shown in Listing 3-3
Such behavior won’t get us very far in a real usage scenario. Ideally, an application that has been released to a production environment should be thoroughly tested and devoid of such uncaught exceptions. However, in the event of such a crash, an application should at the very least be able to bring itself back online without requiring manual intervention. PM2 can help us accomplish this goal.

PM2 helps Node applications recover from fatal errors
Notice anything interesting here? Based on the values within the status, restart, and uptime columns, we can see that our application has crashed three times already. Each time, PM2 has helpfully stepped in and restarted it for us. The most recent process has been running for a total of 2 seconds, which means we can expect another crash (and automatic restart) 2 seconds from now.
PM2’s ability to assist applications in recovering from fatal errors in a production environment, while valuable, is just one of several useful features the utility provides. PM2 is also equally useful within development environments, as we’ll soon see.
Imagine a scenario in which you’ve recently begun work on a new Node project. Let’s assume it’s a web API built with Express. Without the help of additional tools, you must manually restart the related Node process in order to see the effects of your ongoing work—a frustrating chore that quickly grows old. PM2 can assist you in this situation by automatically monitoring the file structure of your project. As changes are detected, PM2 can automatically restart your application for you, if you instruct it to do so.

Creating a new PM2 process that will automatically restart itself as changes are detected
As changes are saved to this project’s files, subsequent calls to PM2’s list command will indicate how many times PM2 has restarted the application, as seen in a previous example.

Logging incoming requests to Express with morgan
We recently explored how to allow PM2 to manage the execution of this application for us via the start command (see Figure 3-1). Doing so provides us with several benefits, but it also causes us to lose immediate insight into the output being generated by our application to the console. Fortunately, PM2 provides us with a simple mechanism for monitoring such output.

Monitoring the output from processes under PM2’s control

Monitoring the output from a specific process under PM2’s control
| Argument | Description |
|---|---|
| --raw | Displays the raw content of log files, stripping prefixed process identifiers in the process |
| --lines <N> | Instructs PM2 to display the last N lines, instead of the default of 20 |
In the previous section, you learned how PM2 can assist you in monitoring the standard output and errors being generated by processes under its control. In much the same way, PM2 also provides easy-to-use tools for monitoring the health of those processes, as well as for monitoring the overall health of the server on which they are running.

Monitoring CPU and memory usage via PM2’s monit command
The information provided by PM2’s monit command provides us with a quick and easy method for monitoring the health of its processes. This functionality is particularly helpful during development, when our primary focus is on the resources being consumed within our own environment. It’s less helpful, however, as an application moves into a remote, production environment that could easily consist of multiple servers, each running its own instance of PM2.

Enabling PM2’s JSON web API
In this example, we enable PM2’s web-accessible JSON API by calling the utility’s web command. PM2 implements this functionality as part of a separate application that runs independently of PM2 itself. As a result, we can see that a new process, pm2-http-interface, is now under PM2’s control. Should we ever wish to disable PM2’s JSON API, we can do so by removing this process as we would any other, by passing its name (or ID) to the delete (or stop) commands.
Most of this chapter’s focus so far has revolved around interactions with PM2 that occur primarily via the command line. On their own, commands such as start, stop, restart, and delete provide us with simple mechanisms for managing processes in a quick, one-off fashion. But what about more complex scenarios? Perhaps an application requires that additional parameters be specified at runtime, or perhaps it expects that one or more environment variables be set.
| Setting | Description |
|---|---|
| name | Name of the application. |
| cwd | Directory from which the application will be launched. |
| args | Command-line arguments to be passed to the application. |
| script | Path to the script with which PM2 will launch the application (relative to cwd). |
| node_args | Command-line arguments to be passed to the node executable. |
| log_date_format | Format with which log timestamps will be generated. |
| error_file | Path to which standard error messages will be logged. |
| out_file | Path to which standard output messages will be logged. |
| pid_file | Path to which the application’s PID (process identifier) will be logged. |
| instances | The number of instances of the application to launch. Discussed in further detail in the next section. |
| max_restarts | The maximum number of times PM2 will attempt to restart (consecutively) a failed application before giving up. |
| max_memory_restart | PM2 will automatically restart the application if the amount of memory it consumes crosses this threshold. |
| cron_restart | PM2 will automatically restart the application on a specified schedule. |
| watch | Whether or not PM2 should automatically restart the application as changes to its file structure are detected. Defaults to false. |
| ignore_watch | An array of locations for which PM2 should ignore file changes, if watching is enabled. |
| merge_logs | If multiple instances of a single application are created, PM2 should use a single output and error log file for all of them. |
| exec_mode | Method of execution. Defaults to fork. Discussed in further detail in the next section. |
| autorestart | Automatically restarts a crashed or exited application. Defaults to true. |
| vizion | If enabled, PM2 will attempt to read metadata from the application’s version control files, if they exist. Defaults to true. |
| env | Object containing environment variable keys/values to pass to the application. |
The application configuration file shown here provides PM2 with instructions on how to launch each of the applications included within this project. In this example, PM2 is instructed to restart each application if changes are detected to either’s file structure, or if they begin to consume more than 60MB of memory. The file also provides PM2 with separate environment variables to be passed to each process.
Before running this example, you will need to adjust the values for the cwd settings within this file so that they reference the absolute path to the microservices folder on your computer. After making the appropriate adjustments, launch both applications with a single call to PM2, as shown in Figure 3-12.
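The listing itself is not reproduced here; as a rough sketch (shown in PM2’s ecosystem.config.js form, with hypothetical paths, entry points, and environment variables), such a file might resemble the following:

```javascript
// Sketch of an application configuration file for the two microservices
module.exports = {
  apps: [
    {
      name: 'main',
      cwd: '/absolute/path/to/microservices/main',   // adjust to your machine
      script: 'index.js',                            // hypothetical entry point
      watch: true,
      max_memory_restart: '60M',
      env: { NODE_ENV: 'development' }               // illustrative variables
    },
    {
      name: 'weather-api',
      cwd: '/absolute/path/to/microservices/weather-api',
      script: 'index.js',
      watch: true,
      max_memory_restart: '60M',
      env: { NODE_ENV: 'development', PORT: 7000 }
    }
  ]
};
```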

Launching the main and weather-api applications with PM2

Excerpt of the output generated by PM2’s logs command
The single-threaded, nonblocking nature of Node’s I/O model makes it possible for developers to create applications capable of handling thousands of concurrent connections with relative ease. While impressive, the efficiency with which Node is capable of processing incoming requests comes with one major expense: an inability to spread computation across multiple CPUs. Thankfully, Node’s core cluster module provides a method for addressing this limitation. With it, developers can write applications capable of creating their own child processes—each running on a separate processor, and each capable of sharing the use of ports with other child processes and the parent process that launched it.
Before we close out this chapter, let’s take a look at a convenient abstraction of Node’s cluster module that is provided by PM2. With this functionality, applications that were not originally written to take advantage of Node’s cluster module can be launched in a way that allows them to take full advantage of multiprocessor environments. As a result, developers can quickly scale up their applications to meet increasing demand without immediately being forced to bring additional servers to bear.
The application configuration file shown in Listing 3-9 contains two key items of interest. The first is the instances property. In this example, we specify a value of 0, which instructs PM2 to launch a separate process for every CPU that it finds. The second is the exec_mode property. By specifying a value of cluster, we instruct PM2 to launch its own parent process, which will in turn launch separate child processes for our application with the help of Node’s cluster module.
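A minimal sketch of such an entry (the application name matches the multicore process referenced below; the script name is hypothetical):

```javascript
// Sketch of a cluster-mode entry in a PM2 application configuration file
module.exports = {
  apps: [
    {
      name: 'multicore',
      script: 'index.js',     // hypothetical entry point
      instances: 0,           // 0 = launch one process per available CPU
      exec_mode: 'cluster'    // let PM2's parent process manage the children
    }
  ]
};
```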

Launching the application in cluster mode with PM2
When launching applications in cluster mode, PM2 will print a message to the console warning that this functionality is still a beta feature. According to the lead developer of PM2, however, this functionality is stable enough for production environments, so long as Node v0.12.0 or higher is being used.

Monitoring CPU usage with PM2’s monit command
Before you continue, you can quickly remove each of the eight processes launched by this example by running $ pm2 delete multicore.
After launching an application in cluster mode, PM2 will begin forwarding incoming requests in a round-robin fashion to each of the eight processes under its control—providing us with an enormous increase in performance. As an added benefit, having our application distributed across multiple processors also allows us to release updates without incurring any downtime, as we will see in a moment.
Copying the updated source code to the appropriate server(s)
Restarting each of the processes under PM2’s control
As these steps take place, a brief period of downtime will be introduced, during which incoming requests to the application will be rejected—unless special precautions are taken. Fortunately, launching applications with PM2 in cluster mode provides us with the tools we need to take those precautions.
Previous examples have demonstrated the use of PM2’s restart command, which immediately stops and starts a specified process. While this behavior is typically not a problem within nonproduction environments, issues begin to surface when we consider the impact it would have on any active requests that our application may be processing at the moment this command is issued. When stability is of the utmost importance, PM2’s gracefulReload command serves as a more appropriate alternative.
When called, gracefulReload first sends a shutdown message to each of the processes under its control, providing them with the opportunity to take any necessary precautions to ensure that any active connections are not disturbed. Only after a configurable period of time has passed (specified via the PM2_GRACEFUL_TIMEOUT environment variable) will PM2 then move forward with restarting the process.
In this example, after receiving the shutdown message, our application responds by calling the close() method on the HTTP server that was created for us by Express. This method instructs our server to stop accepting new connections, but allows those that have already been established to complete. Only after 10 seconds have passed (as specified via PM2_GRACEFUL_TIMEOUT) will PM2 restart the process, at which point any connections managed by this process should already have been completed.
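A minimal sketch of the pattern described here, assuming an existing Express application stored in a variable named app (the port is illustrative):

```javascript
// Sketch: cooperating with PM2's gracefulReload command
var server = app.listen(3000);

process.on('message', function(message) {
  if (message === 'shutdown') {
    // Stop accepting new connections; established connections may finish.
    // PM2 restarts the process once PM2_GRACEFUL_TIMEOUT has elapsed.
    server.close();
  }
});
```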

Gracefully reloading each of the processes under PM2’s control
PM2 provides developers with a powerful utility for managing Node applications that is equally at home in both production and nonproduction environments. Simple aspects, such as the utility’s ability to automatically restart processes under its control as source code changes occur, serve as convenient timesavers during development. More advanced features, such as the ability to load balance applications across multiple processors and to gracefully restart those applications in a way that does not negatively impact users, also provide critical functionality for using Node in a significant capacity.
PM2 Home: http://pm2.keymetrics.io/
It is more productive to think about what is within my control than to worry and fret about things that are outside of my control. Worrying is not a form of thinking.
—Peter Saint-Andre
While JavaScript now plays a far more significant role in web applications, the HTML5 specification (and therefore modern browsers) does not specify a means to detect dependency relationships among scripts or how to load script dependencies in a particular order. In the simplest scenario, scripts are typically referenced in page markup with simple <script> tags. These tags are evaluated, loaded, and executed in order, which means that common libraries or modules are typically included first, then application scripts follow. (For example, a page might load jQuery and then load an application script that uses jQuery to manipulate the Document Object Model [DOM].) Simple web pages with easily traceable dependency hierarchies fit well into this model, but as the complexity of a web application increases, the number of application scripts will grow and the Web of dependencies may become difficult, if not impossible, to manage.
The whole process is made even messier by asynchronous scripts. If a <script> tag possesses an async attribute, the script content will be loaded over HTTP in the background and executed as soon as it becomes available. While the script is loading, the remainder of the page, including any subsequent script tags, will continue to load. Large dependencies (or dependencies delivered by slow sources) that are loaded asynchronously may not be available when application scripts are evaluated and executed. Even if application <script> tags possess async attributes as well, a developer has no means of controlling the order in which all asynchronous scripts are loaded, and therefore no way to ensure that the dependency hierarchy is respected.
The HTML5 <script> tag attribute defer is similar to async but delays script execution until page parsing has finished. Both of these attributes reduce page rendering delays, thereby improving user experience and page performance. This is especially important for mobile devices.
RequireJS was created to address this dependency orchestration problem by giving developers a standard way to write JavaScript modules (“scripts”) that declare their own dependencies before any module execution occurs. By declaring all dependencies up front, RequireJS can ensure that the overall dependency hierarchy is loaded asynchronously while executing modules in the correct order. This pattern, known as Asynchronous Module Definition (AMD), stands in contrast to the CommonJS module-loading pattern adopted by Node.js and the Browserify module-loading library. While there are certainly strong points to be made for using both patterns in a variety of use cases, RequireJS and AMD were developed to address issues specific to web browsers and DOM shortcomings. In reality, the concessions that RequireJS and Browserify make in their implementations are usually mitigated by workflow and community plugins.
For example, RequireJS can create dynamic shims for non-AMD dependencies that it must load (usually remote libraries on content delivery networks or legacy code). This is important because RequireJS assumes that scripts in a web application may come from multiple sources and will not all directly be under a developer’s control. By default, RequireJS does not concatenate all application scripts (“packing”) into a single file, opting instead to issue HTTP requests for every script it loads. The RequireJS tool r.js, discussed later, produces packed bundles for production environments, but can still load remote, shimmed scripts from other locations. Browserify, on the other hand, takes a “pack-first” approach. It assumes that all internal scripts and dependencies will be packed into a single file and that other remote scripts will be loaded separately. This places remote scripts beyond the control of Browserify, but plugins like bromote work within the CommonJS model to load remote scripts during the packing process. For both approaches, the end result is the same: a remote resource is made available to the application at runtime.
This chapter contains a variety of examples that may be run in a modern web browser. Node.js is necessary to install code dependencies and to run all web server scripts.
To install the example code dependencies, open the code/requirejs directory in a terminal and execute the command npm install. This command will read the package.json file and download the few packages necessary to run each example.
The command output shows that the web server is listening at http://localhost:8080. In a web browser, navigating to http://localhost:8080/index.html would render the HTML snippet in Listing 4-1.
The workflow for using RequireJS in a web application typically includes some common steps. First, RequireJS must be loaded in an HTML file with a <script> tag. RequireJS may be referenced as a stand-alone script on a web server or CDN, or it may also be installed with package managers like Bower and npm, then served from a local web server. Next, RequireJS must be configured so that it knows where scripts and modules live, how to shim scripts that are not AMD compliant, which plugins to load, and so on. Once configuration is complete, RequireJS will load a primary application module that is responsible for loading the major page components, essentially “kicking off” the page’s application code. At this point RequireJS evaluates the dependency tree created by modules and begins asynchronously loading dependency scripts in the background. Once all modules are loaded, the application code proceeds to do whatever is within its purview.
Each step in this process is given detailed consideration in the following sections. The example code used in each section represents the evolution of a simple application that will show inspirational and humorous quotes by (semi-)famous persons.
The RequireJS script may be downloaded directly from http://requirejs.org . It comes in a few distinct flavors: a vanilla RequireJS script, a vanilla RequireJS script prebundled with jQuery, and a Node.js package that includes both RequireJS and its packing utility, r.js. For most examples in this chapter, the vanilla script is used. The prebundled jQuery script is merely offered as a convenience for developers. If you wish to add RequireJS to a project that is already using jQuery, the vanilla RequireJS script can accommodate the existing jQuery installation with no issues, though older versions of jQuery may need to be shimmed. (Shimmed scripts will be covered later.)
If you are working with CoffeeScript, RequireJS also comes with a plugin for CoffeeScript integration. An internationalization plugin is also available; both can be downloaded directly from https://requirejs.org.
After the RequireJS script is loaded on a page, it looks for a configuration, which will primarily tell RequireJS where scripts and modules live. Configuration options can be provided in one of three ways.
First, a global require object may be created before the RequireJS script is loaded. This object may contain all of the RequireJS configuration options as well as a “kickoff” callback that will be executed once RequireJS has finished loading all application modules.
The most important configuration property on this object, baseUrl, identifies a path relative to the application root where RequireJS should begin to resolve module dependencies. The deps array specifies modules that should be loaded immediately after configuration, and the callback function exists to receive these modules once they are loaded. This example loads a single module, quotes-view. Once the callback is invoked, it may access the properties and methods on this module.
Notice that the absolute path and file extension for the quotes-view module are omitted in the deps array. By default, RequireJS assumes that any given module is located relative to the page being viewed and that it is contained within a single JavaScript file with the appropriate file extension. In this case the latter assumption is true but the first is not, which is why specifying a baseUrl property is necessary. When RequireJS attempts to resolve any module, it will combine any configured baseUrl value and the module name, then append the .js file extension to produce a full path relative to the application root.
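A condensed sketch of such a global configuration object, defined in a <script> block before require.js itself is loaded (the baseUrl value and the quote passed to addQuote() are illustrative):

```javascript
// Sketch: global require object evaluated by RequireJS at load time
var require = {
  baseUrl: '/scripts',
  deps: ['quotes-view'],
  callback: function (quotesView) {
    // Invoked once quotes-view (and its own dependencies) have loaded
    quotesView.addQuote('Simplicity is the ultimate sophistication.');
  }
};
```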
When the config01.html page loads, the strings passed to the quotesView.addQuote() method will be displayed on the page.
In this example a <script> block first uses the global requirejs object, created by the require.js script, to configure RequireJS by invoking its config() method. It then invokes requirejs to kick off the application. The object passed to the config() method resembles the global require object from Listing 4-4, but lacks its deps and callback properties. The requirejs function accepts an array of application dependencies and a callback function instead, a pattern that will become very familiar when module design is covered later.
The net effect is the same: RequireJS uses its configuration to load the quotes-view module, and once loaded, the callback function interacts with it to affect the page.
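A condensed sketch of this second approach (values are illustrative):

```javascript
// Sketch: configuring RequireJS imperatively, then kicking off the application
requirejs.config({
  baseUrl: '/scripts'
});

requirejs(['quotes-view'], function (quotesView) {
  quotesView.addQuote('Well begun is half done.');
});
```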
Because the data-main script is loaded asynchronously, scripts or <script> blocks included immediately after RequireJS will likely be run first. If RequireJS manages all scripts in an application, or if scripts loaded after RequireJS have no bearing on the application itself (such as advertiser scripts), there will be no conflicts.
A RequireJS module definition, created with the global define() function, consists of up to three parts:

A module name
A list of dependencies (modules)
A module closure that will accept the output from each dependency module as function arguments, set up module code, and potentially return something that other modules can use
A module’s name is key. In Listing 4-9 a module name, m1, is explicitly declared. If a module name is omitted (leaving the dependencies and module closure as the only arguments passed to define()), then RequireJS will assume that the name of the module is the file name containing the module script, without its .js extension. This is fairly common in practice, but the module name is shown here for clarity.
Giving modules specific names can introduce unwanted complexity, as RequireJS depends on script URL paths for loading modules. If a module is explicitly named and the file name does not match the module name, then a module alias that maps the module name to an actual JavaScript file needs to be defined in the RequireJS configuration. This is covered in the next section.
The dependency list in Listing 4-9 identifies two other modules that RequireJS should load. The values d1 and d2 are the names of these modules, located in script files d1.js and d2.js. These scripts look similar to the module definition in Listing 4-9, but they will load their own dependencies.
Finally, the module closure accepts the output from each dependency module as function arguments. This output is any value returned from each dependency module’s closure function. The closure in Listing 4-9 returns its own value, and if another module were to declare m1 as a dependency, it is this returned value that would be passed to that module’s closure.
If a module has no dependencies, its dependency array will be empty and it will receive no arguments to its closure.
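Pulling those three parts together, a module along the lines of the one described above might be sketched as follows; the returned object, and the assumption that d1 and d2 each return an object with a name property, are purely illustrative.

// m1.js -- an explicitly named module with two dependencies
define('m1', ['d1', 'd2'], function (d1, d2) {
  // d1 and d2 receive whatever values those modules returned from their closures
  return {
    describe: function () {
      return 'm1 depends on ' + d1.name + ' and ' + d2.name;
    }
  };
});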
Once a module is loaded, it exists in memory until the application is terminated. If multiple modules declare the same dependency, that dependency is loaded only once. Whatever value it returns from its closure will be passed to both modules by reference. The state of a given module, then, is shared among all other modules that use it.
A module may return any valid JavaScript value, or none at all if the module exists only to manipulate other modules or simply produce side effects in the application.

Figure: RequireJS dependency tree
As application dependencies multiply, module pathing can become tedious, but there are two ways to mitigate this.
First, a module may use leading dot notation to specify dependencies relative to itself. For example, a module with the declared dependency ./foo would load foo.js as a sibling file, located on the same URL segment as itself, whereas a module with the dependency ../bar would load bar.js one URL segment “up” from itself. This greatly reduces dependency verbosity.
Second, modules may be named with path aliases, defined in the RequireJS configuration, as described in the next section.
Assigning an alias to a module allows other modules to use the alias as a dependency name instead of the full module pathname. This can be useful for a variety of reasons but is commonly used to simplify vendor module paths, eliminate version numbers from vendor module names, or deal with vendor libraries that declare their own module names explicitly.
It is unlikely that jquery lives at the module root defined by the baseUrl configuration, however. It is more likely that the jquery script would exist within a vendor directory such as /scripts/vendor/jquery, and that the script name would contain the jQuery version (e.g., jquery-2.1.3.min), as this is how jQuery scripts are distributed. To further complicate matters, jQuery explicitly declares its own module name, jquery. If a module attempted to load jquery using the full path to the jQuery script, /scripts/vendor/jquery/jquery-2.1.3.min, RequireJS would load the script over HTTP and then fail to import the module because its declared name is jquery, not jquery-2.1.3.min.
Explicitly naming modules is considered bad practice because application modules must use a module’s declared name, and the script file that contains the module must either share its name or be aliased in the RequireJS configuration. A special concession is made for jQuery because it is a fairly ubiquitous library.
Because module aliases take precedence over actual module locations, RequireJS will resolve the location of the jQuery script before attempting to locate it at /scripts/jquery.js.
Anonymous modules (that do not declare their own module names) may be aliased with any module name, but if named modules are aliased (like jquery), they must be aliased with their declared module names.
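A minimal sketch of such an alias, assuming a baseUrl of /scripts and the vendor path described above, might look like this:

requirejs.config({
  baseUrl: '/scripts',
  paths: {
    // map the declared module name "jquery" to the actual script location,
    // relative to baseUrl and without the .js extension
    jquery: 'vendor/jquery/jquery-2.1.3.min'
  }
});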
Libraries such as jQuery, Underscore, Lodash, Handlebars, and so forth all have plugin systems that let developers extend the functionality of each. Strategic use of module aliases can actually help developers load extensions for these libraries all at once, without having to specify such extensions in every module that makes use of them.
The jquery-all proxy module returns the jQuery object itself, which allows modules that depend on jquery-all to access jquery with the loaded custom extensions. By default, jQuery registers itself with the global window object, even when it is used as an AMD module. If all application modules are accessing jQuery through the jquery-all module (or even the plain jquery module, as most vendor libraries do), then there is no need for the jQuery global. It may be removed by invoking $.noConflict(true). This will return the jquery object and is the alternate return value for the jquery-all module in Listing 4-15.
In this case, even though jquery-plugin-1 and jquery-plugin-2 do not return values, they must still be added as dependencies so that their side effects—adding plugins to the jquery module—still occur.
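The proxy module itself can be sketched roughly as follows; the plugin module names are the illustrative ones used above.

// jquery-all.js -- loads jQuery plus its plugins, then hands the extended
// jQuery object to any module that lists jquery-all as a dependency
define(['jquery', 'jquery-plugin-1', 'jquery-plugin-2'], function ($) {
  // the plugin modules return nothing; they are listed purely for their
  // side effect of attaching themselves to $
  // (returning $.noConflict(true) instead would also remove the window.jQuery global)
  return $;
});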
Libraries that support the AMD module format are straightforward to use with RequireJS. Non-AMD libraries may still be used by configuring RequireJS shims or by creating shimmed modules manually.
The data/quotes module in example-003 exposes a groupByAttribution() method that iterates over the collection of quotes. It creates a hash where keys are the names of people and values are arrays of quotes attributed to them. This grouping functionality would likely be useful for other collections as well.
Fortunately, a vendor library, undrln, can provide a generalized version of this functionality, but it is not AMD compatible. A shim would be necessary for other AMD modules to use undrln as a dependency. Undrln is written as a standard JavaScript module within a function closure, shown in Listing 4-18. It assigns itself to the global window object, where it may be accessed by other scripts on a page.
The undrln.js script blatantly mimics a subset of the Lodash API without AMD module compatibility, exclusively for this chapter’s examples.
By shimming non-AMD scripts, RequireJS can use its asynchronous module-loading capabilities behind the scenes to load non-AMD scripts when they are dependencies of other AMD modules. Without this capability these scripts would need to be included on every page with a standard <script> tag and loaded synchronously to ensure availability.

Figure: RequireJS modules shown loaded in Chrome
It is reasonable to expect shimmed scripts to have dependencies, likely objects in the global scope. When AMD modules specify dependencies, RequireJS ensures that the dependencies are loaded first, before the module code is executed. Dependencies for shimmed scripts are specified in a similar manner within the shim configuration. A shimmed script may depend on other shimmed scripts, or even AMD modules if those modules make content available in the global scope (usually a bad idea, but sometimes necessary).
To enhance the example application, a search field has been added to the quote page in example-005. Terms entered into the search field appear highlighted in the text of any quote in which they are found. Up to this point, all examples have used a single view, quotes-view, to display the rendered markup. Because the application features are growing, two new modules will be introduced to help manage features: search-view and quotes-state. The search-view module is responsible for monitoring a text field for user input. When this field changes, the view informs the quotes-state module that a search has occurred, passing it the search term. The quotes-state module acts as the single source of state for all views, and when it receives a new search term, it triggers an event to which views may subscribe.
Digging through some legacy source code produced the file public/scripts/util/jquery.highlight.js, a non-AMD jQuery plugin that highlights text in the DOM. When the quotes-view module receives the search event from the quotes-state module, it uses this plugin to highlight text in the DOM based on the search term stored in quotes-state. To use this legacy script, a path and a shim entry are both added to the main.js configuration. The highlight plugin doesn’t export any values, but it does need jQuery to be loaded first or the plugin will throw an error when it attempts to access the global jQuery object.
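A sketch of the relevant additions to the main.js configuration follows; the highlight alias and the baseUrl value are assumptions made for illustration, while the deps entry reflects the plugin's dependency on jQuery described above.

requirejs.config({
  baseUrl: '/scripts',
  paths: {
    highlight: 'util/jquery.highlight'   // non-AMD legacy plugin
  },
  shim: {
    highlight: {
      deps: ['jquery']                   // ensure jQuery is loaded before the plugin runs
    }
  }
});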
Once the highlight plugin has been shimmed, it may be loaded as a dependency of another module. Since the jquery-all module is responsible for loading custom plugins anyway, making the highlight module one of its dependencies in Listing 4-23 seems sensible.
Other shimmed scripts that execute immediately and potentially create one or more reusable variables or namespaces in the global scope
AMD modules that also create reusable variables or namespaces in the global scope (such as window.jQuery) as a side effect
This arrangement raises two questions:

Since both the highlight and jquery-all modules declare jquery as a dependency, when is jQuery actually loaded?
Why isn’t a second highlight parameter specified in the jquery-all module closure function?
First, when RequireJS evaluates dependencies among modules, it creates an internal dependency tree based on module hierarchy. By doing this it can determine the optimal time to load any particular module, starting from the leaves and moving toward the trunk. In this case the “trunk” is the jquery-all module, and the furthest leaf is the jquery module on which highlight depends. RequireJS will execute module closures in the following order: jquery, highlight, jquery-all. Because jquery is also a dependency of jquery-all, RequireJS will simply deliver the same jquery instance created for the highlight module.
Second, the highlight module returns no value and is used merely for side effects—for adding a plugin to the jQuery object. No parameter is passed to the jquery-all module because highlight returns none. Dependencies that are used only for side effects should always be placed at the end of a module’s dependency list for this reason.
There are several RequireJS loader plugins that are so useful, they find a home in most projects. A loader plugin is an external script that is used to conveniently load, and sometimes parse, specific kinds of resources that may then be imported as standard AMD dependencies, even though the resources themselves may not be actual AMD modules.
The RequireJS text plugin can load a plain text resource over HTTP, serialize it as a string, and deliver it to an AMD module as a dependency. This is commonly used to load HTML templates or even raw JSON data from HTTP endpoints. To install the plugin, the text.js script must be copied from the project repository and, by convention, placed in the same directory as the main.js configuration file. (Alternative installation methods are listed in the plugin project’s README.)
It is not necessary to be completely familiar with Handlebars syntax to understand that this template iterates over the data object, pulling out each attribution and its associated quotes. It creates an <h2> element for the attribution, then for each quote builds a <blockquote> element to hold the quote text. A special block helper, #explode, breaks the quote text apart at the new line (\n) delimiter and then wraps each segment of the quote text in a <p> tag.
The #explode helper is significant because it is not native to Handlebars. It is defined and registered as a Handlebars helper in the file public/scripts/util/handlebars-all.js, as shown in Listing 4-26.
Because this module adds helpers and then returns the Handlebars object, the quotes-view module will import it as a dependency instead of the vanilla Handlebars module, in much the same way as the jquery-all module is used in lieu of jquery. The appropriate module alias has been added to the configuration in Listing 4-27.
When RequireJS encounters a dependency name with the text! prefix, it automatically attempts to load the text.js plugin script, which will then load and serialize the specified file content as a string. The quotesTemplate function argument in the quotes-view closure will contain the serialized content of the quotes.hbs file, which is then compiled by Handlebars and used to render the module in the DOM.
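A rough sketch of the quotes-view module illustrates the pattern; the show() method and element handling are assumptions for illustration, but the text! dependency, the handlebars-all module, and the compile step reflect the behavior just described.

// quotes-view.js -- a sketch, not a reproduction of the book's listing
define(['handlebars-all', 'text!templates/quotes.hbs'], function (Handlebars, quotesTemplate) {
  // quotesTemplate holds the raw contents of quotes.hbs as a string
  var render = Handlebars.compile(quotesTemplate);
  return {
    show: function (element, data) {
      element.innerHTML = render(data);   // render the compiled template into the DOM
    }
  };
});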
When the browser has finished parsing a page's HTML and constructing the DOM, it fires a DOMContentLoaded event (in modern browsers). Scripts that are loaded before the browser has finished building the DOM often listen for this event to know when it is safe to begin manipulating page elements. If scripts are loaded just before the closing </body> tag, they may assume that the bulk of the DOM has already been loaded and that they need not listen for this event. Scripts anywhere else in the <body> element, or more commonly in the <head> element, have no such luxury, however.
The domReady plugin is a peculiar kind of “loader” in that it simply stalls the invocation of a module’s closure until the DOM is completely ready. Like the text plugin, the domReady.js file must be accessible to RequireJS within the baseUrl path defined in the main.js configuration. By convention it is typically a sibling of main.js.
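Used as a loader plugin, domReady can simply be added to a module's dependency list; a minimal sketch, with illustrative module names, looks like this:

// the module closure will not run until the DOM is ready
define(['domReady!', 'quotes-view'], function (doc, quotesView) {
  // doc is the document object supplied by the domReady plugin;
  // it is now safe to query and manipulate page elements
  quotesView.addQuote('The DOM is ready.');
});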
RequireJS supports internationalization via the i18n loader plugin. (i18n is a numeronym, which means that the number “18” represents the 18 characters between “i” and “n” in the word “internationalization”.) Internationalization is the act of writing a web application such that it can adapt its content to a user’s language and locale (also known as National Language Support, or NLS). The i18n plugin is primarily used for translating text in a web site’s controls and “chrome”: button labels, headers, hyperlink text, fieldset legends, and so forth. To demonstrate this plugin’s capabilities, two new templates have been added to the example application, one for the page title in the header and one for the search field with placeholder text. The actual quote data will not be translated because, presumably, it comes from an application server that would be responsible for rendering the appropriate translation. In this application, though, the data is hard-coded in the data/quotes module for simplicity and will always appear in English.
First, a root property holds the key/value pairs that will be used to fetch translated data when the plugin resolves the language translations. The keys in this object are simply keys by which the translated text may be accessed programmatically. In the search template, for example, {{searchPlaceholder}} will be replaced with the string value at the language object’s key searchPlaceholder when the template is bound to it.
Second, siblings to the root property are the various IETF language tags for active and inactive translations that should be resolved based on a browser's language setting. In this example, the German de language tag is assigned the value true. If a Spanish translation were made available, an es-es property with the value true could be added; likewise, an fr-fr property for a French translation, and so forth for other languages.
When the default language module is loaded with the i18n plugin, it examines the browser’s window.navigator.language property to determine what locale and language translation should be used. If the default language module specifies a compatible, enabled locale, the i18n plugin loads the locale-specific module and then merges it with the default language module’s root object. Missing translations in the locale-specific module will be filled with values from the default language module.
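A sketch of the default language module, nls/lang.js, might look like the following; the searchPlaceholder key and the de flag come from the description above, while the pageTitle key and the string values are illustrative assumptions.

// nls/lang.js -- the default (root) language bundle
define({
  root: {
    pageTitle: 'Programming Quotes',        // illustrative English defaults
    searchPlaceholder: 'Search quotes...'
  },
  // locales for which a translation module exists (here, nls/de/lang.js)
  de: true
});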

Figure: Switching the browser language loads the German translation.
The window.navigator.language property is affected by different settings in different browsers. For example, in Google Chrome it only reflects the user’s language setting, whereas in Mozilla Firefox it can be affected by an Accept-Language header in a page’s HTTP response as well.
Application servers often cache resources like script files, images, stylesheets, and so on to eliminate unnecessary disk access when serving a resource that has not changed since it was last read. Cached resources are often stored in memory and associated with some key, usually the URL of the resource. When multiple requests for a given URL occur within a specified cache period, the resource is fetched from memory using the key (URL). This can have significant performance benefits in a production environment, but invalidating cache in development or testing environments every time a code change is made, or a new resource is introduced, can become tedious.
Certainly caching can be toggled on a per-environment basis, but a simpler solution, at least for JavaScript (or any resource loaded by RequireJS), might be to utilize the RequireJS cache-busting feature. Cache busting is the act of mutating the URL for every resource request in such a way that the resource may still be fetched, but will never be found in cache because its “key” is always different. This is commonly done by including a query string parameter that changes whenever a page is reloaded.
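A minimal sketch of this configuration uses RequireJS's urlArgs option; the parameter name bust matches the bust parameter referenced below.

requirejs.config({
  baseUrl: '/scripts',
  // append an ever-changing query string parameter to every module request
  // so that stale cached copies are never served during development
  urlArgs: 'bust=' + (new Date()).getTime()
});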

Figure: The bust parameter is appended to each RequireJS request
While the usefulness of this feature is evident, it can also create a few problems.
First, RequireJS respects HTTP cache headers, so even if urlArgs is used as a cache-busting mechanism, RequireJS may still request (and receive) a cached version of a resource, depending on how cache is implemented. If possible, always serve the appropriate cache headers in each environment.
Second, be aware that some proxy servers drop query string parameters. If a development or staging environment includes proxies to mimic a production environment, a cache-busting query string parameter may be ineffective. Some developers use urlArgs to specify particular resource versions in a production environment (e.g., version=v2), but this is generally discouraged for this very reason. It is an unreliable versioning technique, at best.
Finally, some browsers treat resources with different URLs as distinct, debuggable entities. In Chrome and Firefox, for example, if a debug breakpoint is set in the source code for http://localhost:8080/scripts/quotes-state.js?bust=1432504595280, it will be removed if the page is refreshed, when the new resource URL becomes http://localhost:8080/scripts/quotes-state.js?bust=1432504694566. Resetting breakpoints can become tedious, and though the debugger keyword can be used to circumvent this problem by forcing the browser to pause execution, it still requires a diligent developer to ensure that all debugger breakpoints are removed before code is promoted to production.
The RequireJS optimizer, r.js, is a build tool for RequireJS projects. It can be used to concatenate all RequireJS modules into a single file, minify source code, copy build output to a distinct directory, and much more. This section introduces the tool and its basic configuration. Specific examples for several common scenarios will be covered next.
The most common way to use r.js involves installing the RequireJS npm package for Node.js, either as a global package or as a local project package. The examples in this section will use the local RequireJS installation created when all npm modules were installed.
A wide array of parameters may be passed as arguments to the r.js tool to control its behavior. Fortunately these parameters can also be passed to r.js in a regular JavaScript configuration file, which makes the terminal command significantly shorter. For nontrivial projects, this is the preferred configuration method and will be the only one covered in this chapter.
The code files in the example-010 directory have been moved into a standard src directory, and a new file, rjs-config.js, has been placed in the directory root. This file, unsurprisingly, contains the r.js configuration. Its contents are shown in Listing 4-38.
Developers who are familiar with build tools will immediately recognize the input/output pattern present in the configuration.
The appDir property specifies the project “input” directory, relative to the configuration file, where uncompiled source code lives.
The dir property specifies the project “output” directory, relative to the configuration file, where compiled and minified output will be written when the r.js tool runs.
The baseUrl property tells r.js where the project scripts are located relative to the appDir property. This should not be confused with the baseUrl property in the main.js file, which tells RequireJS where modules are located relative to the web application root.
The mainConfigFile property points to the actual RequireJS (not r.js) configuration. This helps r.js understand how modules are related to each other, and what module aliases and shims exist, if any. It is possible to omit this property and specify all of these paths in the r.js configuration, though that is beyond the scope of this example.
Setting the inlineText property to true ensures that all text files referenced with the text plugin prefix text! will be compiled with RequireJS modules in the final build output. This option is enabled by default but is explicitly set in this project for clarity.
By default, r.js will minify and copy all scripts (packed and unpacked) to the output directory. The removeCombined property toggles this behavior. In this case only the packed, compiled script(s) and any other scripts that could not be included in the packed output will be copied to the output directory.
The modules array lists all of the top-level modules to be compiled. Because this is a single-page application, only the actual main module needs to be compiled.
Finally, the optimize property instructs r.js to apply an uglify transform to all scripts, minimizing all JavaScript code.
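Taken together, a configuration along these lines would reflect the options just described; the directory names and the mainConfigFile path are assumptions based on the example layout, not a reproduction of Listing 4-38. With the local npm installation, r.js can then be run against it with a command such as node node_modules/requirejs/bin/r.js -o rjs-config.js.

// rjs-config.js -- a sketch of an r.js build configuration
({
  appDir: 'src',                         // uncompiled source, relative to this file
  dir: 'public',                         // compiled, minified output directory
  baseUrl: 'scripts',                    // scripts live in src/scripts
  mainConfigFile: 'src/scripts/main.js', // reuse the RequireJS paths, shims, and aliases
  inlineText: true,                      // bundle text! resources into the build output
  removeCombined: true,                  // omit modules that were packed into main.js
  modules: [
    { name: 'main' }                     // the single top-level module to compile
  ],
  optimize: 'uglify'                     // minify all JavaScript
})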
Several things immediately stand out in the public/scripts directory.
First, the require.js and main.js scripts are both present. Since these scripts are the only files referenced in index.html, their presence here is expected. Other scripts such as the quotes-view.js and quotes-state.js scripts are noticeably absent, but examining the content of main.js reveals why: they have been packed and minified according to the r.js build settings.
Second, the localization file nls/lang.js is now missing because it has been included as part of main.js. The nls/de/lang.js script still remains as part of the build output, though its contents have been minified. Any user browsing the example web page in the default locale will receive an optimized experience, as RequireJS will not have to make an external AJAX call to load the default language translations. Users from Germany will incur the additional HTTP request because the German localization file has not been included in the packed output. This is a limitation of the localization plugin that r.js must respect.
Third, the Handlebars templates, though compiled as part of the build output in main.js, have also been copied to the public/scripts/templates directory. This happens because RequireJS plugins currently have no visibility into the build process and therefore no method of honoring the removeCombined option in the r.js configuration file. Fortunately, because these templates have been wrapped in AMD modules and concatenated with main.js, RequireJS will not attempt to load them with AJAX requests. If deployment size is an issue for this project, a post-build script or task can be created to remove the templates directory if needed.
Fourth, the vendor/ventage directory has been copied to the build directory even though its core module, ventage.js, has been concatenated with main.js. While RequireJS can automatically remove individual module files (like ventage.js) after compilation, it will not clean up other files associated with a module (in this case, unit tests and package definition files like package.json and bower.json), so they must be removed manually or as part of a post-build process.
RequireJS is a very pragmatic JavaScript module loader that works well in a browser environment. Its ability to load and resolve modules asynchronously means that it does not rely solely on bundling or packing scripts for performance benefits. For further optimization, though, the r.js optimization tool may be used to combine RequireJS modules into a single, minified script to minimize the number of HTTP requests necessary to load modules and other resources.
Though RequireJS modules must be defined in AMD format, RequireJS can shim non-AMD scripts so that legacy code may be imported by AMD modules where necessary. Shimmed modules may also have dependencies that can automatically be loaded by RequireJS.
The text plugin lets modules import external text file dependencies (such as templates) as strings. These text files are loaded like any other module dependency and may even be inlined in build output by the r.js optimizer.
Localization is supported by the i18n module loader, which can dynamically load text translation modules based on a browser’s locale settings. While the primary locale translation module can be optimized and concatenated with r.js, additional locale translation modules will always be loaded with HTTP requests.
Module execution can be deferred by the domReady plugin, which prevents a module's closure from executing until the DOM has been fully rendered. This can be an effective way to eliminate repeat calls to jQuery's ready() function, or fumbling through the cross-browser code necessary to subscribe to the DOMContentLoaded event manually.
Finally, the RequireJS configuration can automatically append query string parameters to all RequireJS HTTP requests, providing a cheap but effective cache-busting feature for development environments.
Less is more.
—Ludwig Mies van der Rohe
Browserify is a JavaScript module loader that works around the language's current lack of support for importing modules within the browser by serving as a “pre-processor” for your code. In much the same way that CSS extensions such as SASS and LESS have brought enhanced syntax support to stylesheets, Browserify enhances client-side JavaScript applications by recursively scanning their source code for calls to a global require() function. When Browserify finds such calls, it loads the referenced modules (using the same require() function that is available within Node.js) and combines them into a single file—a “bundle”—that can then be loaded within the browser.
This simple but elegant approach brings the power and convenience of CommonJS (the method by which modules are loaded within Node.js) to the browser while also doing away with the additional complexity and boilerplate code required by Asynchronous Module Definition (AMD) loaders such as RequireJS (described in Chapter 4).
Distinguish between AMD and CommonJS module loaders
Create modular front-end JavaScript applications that follow the simple patterns for module management popularized by tools such as Node.js
Visualize a project’s dependency tree
Compile your application as quickly as possible—as changes are made—using Browserify’s sister application, Watchify
Use third-party Browserify plugins (“transforms”) to extend the tool beyond its core functionality
Portions of this chapter discuss concepts already covered in this book’s previous chapters.
The AMD API is both clever and effective, but many developers also find it to be a bit clumsy and verbose. Ideally, JavaScript applications should be capable of referencing external modules without the added complexity and boilerplate code that the AMD API requires. Fortunately, a popular alternative known as CommonJS exists that addresses this concern.
While most people tend to associate JavaScript with web browsers, the truth is that JavaScript has found widespread use in a number of other environments for quite some time—well before Node.js came on the scene. Examples of such environments include Rhino, a server-side runtime environment created by Mozilla, and ActionScript, a derivative used by Adobe’s once-popular Flash platform that has fallen out of favor in recent years. Each of these platforms works around JavaScript’s lack of built-in module support by creating its own approach.
Sensing a need for a standard solution to this problem, a group of developers got together and proposed what became known as CommonJS, a standardized approach to defining and using JavaScript modules. Node.js follows a similar approach, as does the next major update to JavaScript (ECMAScript 6, a.k.a. ES6 Harmony). This approach can also be used to write modular JavaScript applications that work in all web browsers in use today, although not without the help of additional tools such as Browserify, the subject of this chapter.
Node’s package manager (npm) allows users to install packages in one of two contexts: locally or globally. In this example, browserify is installed within the global context, which is typically reserved for command-line utilities.
Unlike our RequireJS-based example, this application cannot be run directly within the browser because the browser lacks a built-in mechanism for loading modules via require(). Before the browser can understand this application, we must first compile it into a bundle with the help of the browserify command-line utility or via Browserify’s API.
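Assuming a global installation, the simplest possible compile step looks something like the following; the entry and output file names are illustrative.

$ npm install -g browserify
$ browserify main.js -o bundle.js    # follow require() calls from main.js and write one bundle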

Figure: Visualizing the advanced project's dependency tree
Viewing this chart as a static rendering on a page really does not do it justice. For the full effect, you should compile the project and view the chart within your browser by running npm start from within the project’s folder. Doing so will allow you to hover your mouse over the various segments of the chart, each of which represents a dependency encountered by Browserify during its compilation process. While it is not evident in Figure 5-1, an in-depth analysis of the chart indicates that our application’s custom code accounts for only a tiny sliver (9.7kB) of the total size of the bundle generated by Browserify. The vast majority of this project’s nearly 2MB of code consists of third-party dependencies (e.g., Angular, jQuery, Lodash, etc.), an important fact that will be referenced again later in the chapter.
You may also be interested in investigating the browserify-graph and colony command-line utilities (also available via npm), which you can use to generate additional visualizations of a project’s dependency tree.
Projects that take advantage of Browserify cannot be run directly within the browser—they must first be compiled. In order to make the most efficient use of the tool, it is important that projects be set up in such a way as to automatically trigger this step as changes occur within their source code. Let’s take a look at two methods by which this can be achieved.
A Browserify bundle was immediately created.
A web server was launched to host the project.
A watch script was executed that triggers the creation of new Browserify bundles as source code changes are detected.
This simple approach typically serves most small projects quite well; however, as small projects gradually evolve into large ones, developers often grow understandably frustrated with the ever-increasing build times that accompany that growth. Having to wait several seconds before you can try out each of your updates can quickly destroy any sense of “flow” that you might hope to achieve. Fortunately, Browserify’s sister application, Watchify, can help us in these situations.
If Browserify (which compiles applications in their entirety) can be thought of as a meat cleaver, Watchify can be thought of as a paring knife. When invoked, Watchify initially compiles a specified application in its entirety; however, rather than exiting once this process has completed, Watchify continues to run, watching for changes to a project’s source code. As changes are detected, Watchify recompiles only those files that have changed, resulting in drastically faster build times. Watchify accomplishes this by maintaining its own internal caching mechanism throughout each build.
In this example, we wrap our browserify instance with watchify. Afterward, we recompile the project as needed by subscribing to the update event emitted by our wrapped instance.
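A minimal sketch of that pattern, with illustrative file paths, might look like this; the cache and packageCache options are required by Watchify.

var fs = require('fs');
var browserify = require('browserify');
var watchify = require('watchify');

// wrap the browserify instance with watchify
var b = watchify(browserify('./app/index.js', {
  cache: {},
  packageCache: {}
}));

function bundle() {
  b.bundle().pipe(fs.createWriteStream('./dist/bundle.js'));
}

b.on('update', bundle);   // recompile only the changed files on each save
bundle();                 // perform the initial full build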
In the earlier section “Visualizing the Dependency Tree,” we looked at an interactive chart that allowed us to visualize the various dependencies encountered by Browserify as it compiled this chapter’s advanced project (see Figure 5-1). One of the most important facts that we can take away from this chart is that the project’s custom code (found in /app) accounts for only a tiny portion (9.7kB) of the bundle’s total size of 1.8MB. In other words, the vast majority of this project’s code consists of third-party libraries (e.g., Angular, jQuery, Lodash, etc.) that are unlikely to frequently change. Let’s take a look at how we can use this knowledge to our advantage.
/dist/vendor.js: Third-party dependencies
/dist/app.js: Custom application code
By taking this approach, browsers can more efficiently access project updates as they are released. In other words, as changes occur within the project’s custom code, browsers only need to redownload /dist/app.js. Contrast this approach with that of the advanced project, in which each update (no matter how small) forces clients to redownload the project’s nearly 2MB bundle.
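The example project drives this split with a Grunt task, but the general idea can be sketched with Browserify's --require (-r) and --external (-x) flags; the entry path and the exact list of vendor packages are illustrative.

$ browserify -r angular -r jquery -r lodash -o dist/vendor.js
$ browserify app/index.js -x angular -x jquery -x lodash -o dist/app.js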
To see this process in action, navigate to the extracted project in your terminal and run $ npm start. Any missing npm modules will be installed, and the project’s default Grunt task will be run. As this process occurs, two separate bundles will be created. The bundle containing the project’s custom code, /dist/app.js, comes in at only 14kB in size.
As mentioned in this chapter’s introduction, Browserify compiles a project by recursively scanning its source code in search of calls to a global require() function. As these calls are found, Browserify loads the modules they reference via the same require() function used by Node. Afterward, Browserify merges them into a single bundle that browsers are capable of understanding.
In this regard, projects that use Browserify are best thought of as client-side Node applications. Many aspects of Browserify that tend to confuse newcomers are more readily understood when this concept—along with everything that it entails—is kept in mind. Let’s take a look at two such aspects now: module resolution and dependency management.
In situations such as this, Node will first attempt to locate the referenced module within its core library. This process can be seen in action when loading modules such as fs, Node’s file system module. If no match is found, Node will then proceed to search for folders named node_modules, starting with the location of the module that called require() and working its way upward through the file system. As these folders are encountered, Node will check to see if they contain a module (or package) matching that which was requested. This process will continue until a match is found, and if none is found, an exception is thrown.
This simple yet powerful method by which module resolution occurs within Node revolves almost exclusively around the node_modules folder. However, Node provides an often-overlooked method that allows developers to augment this behavior by defining additional folders within which Node should be allowed to search for modules, should the previous steps turn up empty-handed. Let’s take a look at this chapter’s path-env project, which demonstrates how this can be accomplished.
On OS X and Linux, environment variables are set from the terminal by running export ENVIRONMENT_VARIABLE=value. The command to be used within the Windows command line is set ENVIRONMENT_VARIABLE=value.
Take note of this example’s lack of relative module references. For example, notice how this project’s main script, bin/index.js, is able to load a custom module responsible for initializing Express via require('app/api');. The alternative would be to use a relative path: require('../lib/app/api');. Anyone who has worked within complex Node applications and encountered module references along the line of require('../../../../models/animal'); will quickly come to appreciate the increase in code clarity that this approach affords.
It is important to bear in mind that the use of the NODE_PATH environment variable only makes sense within the context of a Node (or Browserify) application—not a package. When creating a reusable package that is intended to be shared with others, you should rely solely on Node’s default module resolution behavior.
Thus far, we have focused on how the NODE_PATH environment variable can have a positive impact on server-side Node applications. Now that we have laid that groundwork, let’s see how this concept can be applied within the context of client-side, browser-based applications compiled with Browserify.
As a result of the paths value that was provided to Browserify, our application can now reference this module from any location by simply calling require('app/utils');.
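A rough sketch of how that paths value might be supplied through Browserify's API follows; the ./src directory layout and file names are assumptions for illustration.

var fs = require('fs');
var browserify = require('browserify');

browserify('./src/main.js', {
    paths: ['./src']       // searched in addition to node_modules folders
  })
  .bundle()
  .pipe(fs.createWriteStream('./dist/bundle.js'));

// elsewhere in the project, modules can now be loaded without long relative paths:
// var utils = require('app/utils');   // resolves to ./src/app/utils.js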
Up until quite recently, the notion of “dependency management” has (for the most part) been a foreign concept within the context of client-side, browser-based projects. The tide has swiftly turned, however, thanks in large part to the rapidly increasing popularity of Node, along with additional utilities built on top of it—a few of which this book has already covered (e.g., Grunt and Yeoman). These utilities have helped to bring desperately needed tooling and guidance to the untamed, “Wild West” that once was (and largely still is) client-side development.
In regard to dependency management, Bower has helped address this need by providing client-side developers with an easy-to-use mechanism for managing the various third-party libraries that applications rely on. For developers who are new to this concept and are not using client-side compilers such as Browserify, Bower has been, and continues to be, a workable option for managing a project's dependencies; however, as developers come to see the advantages afforded by tools such as Browserify, Bower has begun to show its age. Furthermore, Bower is now largely defunct and is no longer under active development. As a result, more and more developers are migrating away from Bower toward alternative solutions, Browserify among them. Bower can still be used, and many JavaScript projects still use it, but relying heavily on it is no longer recommended.
At the beginning of this section, we mentioned that projects using Browserify are best thought of as client-side Node applications. In regard to dependency management, this statement is particularly important. Recall that during Browserify’s compile process, a project’s source code is scanned for calls to a global require() function. When found, these calls are executed within Node, and the returned value is subsequently made available to the client-side application. The important implication here is that when using Browserify, dependency management is significantly simplified when developers rely solely on npm, Node’s package manager. While technically, yes, it is possible to instruct Browserify on how to load packages installed by Bower, more often than not, it’s simply more trouble than it’s worth.
Consider a scenario in which you would like to create a new module, which you intend to publish and share via npm. You want this module to work both within Node and within the browser (via Browserify). To facilitate this, Browserify supports the use of a browser configuration setting within a project’s package.json file. When defined, this setting allows developers to override the location used to locate a particular module. To better understand how this works, let’s take a look at two brief examples.
As in Listing 5-17, a module that implements this pattern will expose distinct entry points into itself: one for Node and a separate one for applications compiled via Browserify. This example takes this concept a step further, however. As this module is compiled, should it ever attempt to load the module located at lib/extra.js, the module located at lib/extra-browser will be substituted instead. In this way, the browser setting allows us to create modules with behavior that can vary greatly depending on whether those modules are run within Node or within the browser.
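A package.json excerpt implementing the substitution described above might look like this; the name and main values are illustrative, while the browser mapping swaps lib/extra.js for lib/extra-browser.js whenever the module is compiled with Browserify.

{
  "name": "example-module",
  "main": "lib/index.js",
  "browser": {
    "./lib/extra.js": "./lib/extra-browser.js"
  }
}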
Developers can build upon Browserify’s core functionality by creating plugins, called transforms, that tap into the compilation process that occurs as new bundles are created. Such transforms are installed via npm and are enabled once their names are included within the browserify.transform array in an application’s package.json file. Let’s take a look at a few useful examples.
The brfs transform simplifies the process of loading file contents inline. It extends Browserify's compilation process to search for calls to the fs.readFileSync() method. When found, the contents of the referenced file are immediately loaded and returned.
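For example, with brfs listed in the project's browserify.transform array, a call like the following is replaced at compile time with the file's contents as an inline string; the template.html file and the element ID are illustrative.

var fs = require('fs');

// brfs statically analyzes this call and inlines the file's contents
var template = fs.readFileSync(__dirname + '/template.html', 'utf8');

document.getElementById('app').innerHTML = template;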
As in the previous example, the package.json file for this project has been modified to include folderify within its browserify.transform array. When compiled, Browserify will search for references to the include-folder module. When the function it returns is called, Browserify will load the contents of each file it finds within the specified folder and return them in the form of an object.
This particular example demonstrates the use of Browserify within the context of an Angular application. If you are unfamiliar with Angular (covered in Chapter 7), don’t worry—the important aspect of this example is the manner in which the bulk() method allows us to require() multiple modules matching one or more specified patterns (in this case, routes/**/route.js).

Figure: File structure for this chapter's transforms-bulkify project
With the browserify-shim transform installed and configured, the module located at app/vendor/foo.js can now be properly imported via require().
Browserify is a powerful utility that extends the intuitive process by which modules are created and imported within Node to the browser. With its help, browser-based JavaScript applications can be organized as a series of small, easy-to-understand, and tightly focused modules that work together to form a larger and more complicated whole. What’s more, there is nothing preventing applications that currently have no module management system in place from putting Browserify to use right away. The process of refactoring a monolithic application down into smaller components is not an overnight process and is best taken one step at a time. With the help of Browserify, you can do just that—as time and resources allow.
Browserify: http://browserify.org
Browserify transforms: https://github.com/substack/node-browserify/wiki/list-of-transforms
Watchify: https://github.com/substack/watchify
Complex systems are characterized by simple elements, acting on local knowledge with local rules, giving rise to complicated, patterned behavior.
—David West
Knockout is a JavaScript library concerned with binding HTML markup to JavaScript objects. It is not a full framework. It has no state router, HTTP AJAX capability, internal message bus, or module loader. Instead, it focuses on two-way data binding between JavaScript objects and the DOM. When the data in a JavaScript application changes, HTML elements bound to Knockout views receive automatic updates. Likewise, when DOM input occurs—through form field manipulation, for example—Knockout captures the input changes and updates the application state accordingly.
In place of low-level, imperative HTML element manipulation, Knockout uses specialized objects called observables and a custom binding syntax to express how application data relates to markup. The internal mechanics are fully customizable so developers can extend Knockout’s capabilities with custom binding syntax and behaviors.
As an independent JavaScript library, Knockout has no dependencies. The presence of other libraries is often required to fulfill the application functions that Knockout does not perform, however, so it plays well with many other common libraries like jQuery, Underscore, Q, and so on. The Knockout API represents data binding operations at a much higher level than strict DOM manipulation, and so places Knockout closer to Backbone or Angular in terms of abstraction, but its slim, view-oriented feature set means it has a far smaller footprint.
Knockout is fully functional in all modern browsers and, as of this writing, extends back to cover Firefox 3.5+, Internet Explorer 6+, and Safari 6+. Its backward compatibility is especially impressive in light of its newest feature, HTML5-compatible components with custom markup tags. The Knockout team has taken pains to make the Knockout development experience seamless in a variety of browser environments.
To run examples, first install Node.js (refer to the Node.js documentation for your system) and then run npm install in the knockout directory to install all example code dependencies. Each example directory will contain an index.js file that runs a simple Node.js web server. To run each example, it will be necessary to launch this server and then navigate to a specified URL in a web browser. For example, to run the index.js file in Listing 6-1, navigate to the knockout/example-000 directory at a terminal prompt and run node index.js.
All example pages include the core Knockout script in a <script> tag reference. You can download this script from http://knockoutjs.com or from one of a number of reputable content delivery networks. Knockout can also be installed as a Bower package or npm module and is both AMD and CommonJS compatible. The Knockout documentation contains detailed instructions for all of these installation methods.
Knockout distinguishes between two sources of information in an application’s user interface: the data model , which represents the state of the application, and the view model, which represents how that state is displayed or communicated to the user. Both of these models are created in an application as JavaScript objects. Knockout bridges them by giving view models a way to represent a data model in a view (HTML) friendly way while establishing bidirectional communication between views and data models so that input affects application state, and application state affects how a view represents data.
Since HTML is the technology that represents data in a web browser, Knockout view models can either bind directly to preexisting HTML document elements or create new elements with HTML templates. Knockout can even create complete reusable HTML components (custom HTML tags with their own attributes and behaviors).
The example application included with this chapter, Omnom Recipes, displays recipe data (“data model”) in a browsable master/detail user interface. Both parts of this interface—the list of recipes and the details presented for each—are logical components situated ideally for Knockout view models. Each will have its own view model, and the application will coordinate the interactions between them. Eventually users will want to add or edit recipes, so additional HTML markup and view models will be introduced for that purpose.

Figure: Example application structure
The index.js file is responsible for launching a web server that will service requests for files in the public directory. When the application’s web page makes an AJAX request for recipe data, the web server will serialize the data in recipes.json and return it to the client.
In the public directory, the index.html file will be served up by default when a user visits http://localhost:8080. This file contains application markup augmented with Knockout attributes. The index.html file also references the app.css stylesheet in public/styles, the two vendor scripts in public/scripts/vendor, and the three application scripts in public/scripts.
A Knockout view model can be applied to an entire page or scoped to specific elements on a page. For nontrivial applications, it is advisable to use multiple view models to maintain modularity. In the Omnom Recipes application, the user interface exists as two logical “components”: a list of recipes and a detailed view of a selected recipe. Instead of using a monolithic view model for the entire page, the application divides Knockout logic into two JavaScript modules in public/scripts: recipe-list.js and recipe-details.js. The app.js module consumes both of these view models and coordinates their activities on the page.

Figure: Omnom Recipes screenshot
To avoid confusion the example application makes use of simple JavaScript closures instead of client-side frameworks or module-oriented build tools to organize modules. These closures often assign a single object to a property on the global window object that will be consumed by other scripts. For example, the recipe-list.js file creates a global object, window.RecipeList, to be used in the app.js file. While completely valid, this architectural decision should be viewed in light of the example application’s simplistic requirements.
The <header> element, which contains static HTML content that will not be manipulated by Knockout
The <nav id="recipe-list"> element, which contains an unordered list of recipes and will be manipulated by Knockout
The <section id="recipe-details"> element, which displays recipe information and will also be manipulated by Knockout
First, it is apparent that Knockout bindings are applied to HTML elements with the data-bind attribute. This is not the sole binding method but it is the most common. Both the <ul> element and the <li> element have bindings in the form binding-name: binding-value.
Second, multiple bindings may be applied to an element as a comma-delimited list, demonstrated by the <li> element, which has bindings for text, click, and css.
Third, bindings with more complex values, such as the css binding on the <li> element, use key/value hashes ({key: value, ... }) to define specific binding options.
Finally, binding values may refer to JavaScript primitives, view model properties, view model methods, or any valid JavaScript expression.
The recipe list Knockout bindings reveal certain things about the Knockout view model that will be bound to the <nav> element. Developers will immediately recognize the foreach flow control statement and correctly infer that recipes will be some collection exposed by the view model over which foreach will loop.
The <li> element within the unordered list has no HTML content of its own, so it may also be inferred that this element serves as a kind of template element that will be bound and rendered for each item in the recipes collection. As with most foreach loops, it is reasonable to expect the object within the loop (the loop’s “context”) to be an element of the collection. The list item’s text binding references the title property of the recipe object for the current iteration and will be injected as the text content of the <li> element when rendered.
The click and css bindings both reference the special $parent object, which tells Knockout that the binding values should target the view model bound with foreach and not the current recipe object. (The view model is the “parent” context and the recipe is its “child.”)
The click binding invokes the selectRecipe() method on the view model whenever the list item’s click event is triggered. It binds the method to the view model specifically, by passing the $parent reference to the method’s bind() function. This ensures that the value of this within the selectRecipe() method does not refer to the DOM element on which the handler is attached when it executes (the DOM’s default behavior).
In contrast, the isSelected() method on the $parent (view model) object is invoked by the css binding, but Knockout, not the DOM, manages the invocation, ensuring the value of this within the method refers to the view model and not a DOM element.
The css binding instructs Knockout to apply specific CSS classes to a DOM element whenever specific criteria are met. The css binding value is a hash of selector/function pairs that Knockout evaluates whenever the DOM element is rendered. If the isSelected() method returns true, the selected CSS class will be added to the list item element. Another special variable, $data, is passed to isSelected(). The $data variable always refers to the current object context in which Knockout is working, in this case an individual recipe object. Some Knockout bindings, like text, operate on the current object context by default; others, like foreach, cause a context switch as a side effect.
The view model object itself may be created in any manner a developer chooses. In the example code, each view model is a simple object literal created by a factory method. It is common to see the JavaScript constructor function pattern used to create view models in the wild, but view models are merely objects and may be constructed as a developer sees fit.
Other than the selectedRecipe property, the recipe list view model is wholly unremarkable. The template’s foreach binding is applied to the recipes property (an array of plain JavaScript objects), the click binding on each list item invokes the selectRecipe() method (passing it a specific recipe), and when each list item is rendered, the isSelected() method is called to determine if the recipe being evaluated has been assigned to the selectedRecipe property or not. Actually, that is not entirely correct. The value of selectedRecipe is not actually a recipe object, but a function—a Knockout observable.
An observable is a special kind of function that holds a value and can notify potential subscribers whenever that value changes. Bindings between HTML elements and observables automatically create subscriptions that Knockout manages in the background. Observables are created with special factory functions on the global ko object. The selectedRecipe observable in Listing 6-5 is created when ko.observable(recipes[0]) is called. Its initial value is the first element in the recipes array. When selectedRecipe() is invoked with no argument, it returns the value it contains (in this case, the object in recipes[0]). Any value passed to selectedRecipe() will become its new value. Although the selectedRecipe() property is not bound to any element in the recipe list template, it is manipulated when the user interacts with the recipe list via the view model’s methods. The changing value of this element will be used as input for the next page component: recipe details.
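A minimal sketch of that behavior follows; the recipes array is the one described above, and the second assignment and log message are illustrative.

var selectedRecipe = ko.observable(recipes[0]);   // initial value

selectedRecipe();              // called with no argument: returns the current value
selectedRecipe(recipes[1]);    // called with a value: stores it and notifies subscribers

// subscriptions may also be created manually
selectedRecipe.subscribe(function (newRecipe) {
  console.log('Selected recipe:', newRecipe.title);
});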
Some bindings, like the <h1> text binding, read a value from a view model property and inject its string value into the HTML element.
Because the paragraphs under the “Details” heading have static content (the text “Servings:” and “Approximate Cook Time:”), <span> tags are used to anchor the Knockout bindings for the servings and cookingTime properties at the end of each paragraph.
The ingredients list iterates over a collection of strings with the foreach binding, so the context object within each loop is a string represented by the $data variable. Each string becomes the text content of a list item.
The <a> tag at the bottom links to the recipe’s web site of origin as a citation. If the recipe has no citation, the anchor will not be displayed. The element’s visible binding examines the view model’s hasCitation observable and, if the value is empty, hides the anchor element. Like the css binding used in the recipe list, the attr binding takes a key/value hash as its binding value. Hash keys (href and title) are the element attributes to be set on the anchor, and values are properties on the view model that will be bound to each attribute.
Listing 6-8 shows two new types of observables: ko.observableArray() and ko.computed().
Observable arrays monitor their values (normal JavaScript arrays) for additions, deletions, and index changes, so that if the array mutates, any subscriber to the observable array is notified. While the ingredients and instructions do not change in this example, code will be introduced later to manipulate the collections and show the observable array’s automatic binding updates in action.
Computed observables generate or compute a value based on other values exposed by observables on the view model. The ko.computed() function accepts a callback that will be invoked to generate the value of the computed observable and, optionally, a context object that acts as the value of this within the callback. When referenced by a template binding, a computed observable’s value will be whatever its callback returns. The cookingTime property in Listing 6-8 creates a formatted string interpolated with the values from the hours and minutes observables. If either hours or minutes changes, the cookingTime computed observable will also update its subscribers.
Because hours and minutes are really functions (though they are treated as properties in Knockout binding expressions), each must be invoked in the body of the computed observable in order to retrieve its value.
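A minimal sketch of both observable types, using property names borrowed from the description above (the actual listing may differ):

var hours = ko.observable(1);
var minutes = ko.observable(30);

// recomputed (and subscribers notified) whenever hours or minutes changes;
// an optional second argument to ko.computed() supplies the value of this
var cookingTime = ko.computed(function () {
    return hours() + ' hr ' + minutes() + ' min';
});

// notifies subscribers when items are added, removed, or reordered
var ingredients = ko.observableArray(['20 mushrooms', '1 onion']);
ingredients.push('2 cloves of garlic');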
A jQuery promise is created that will resolve at some point in the future, when the data obtained from the GET /recipes request becomes available.
The function passed to $() will be triggered when the DOM has been completely initialized to ensure that all Knockout template elements will be present before any binding attempts.
When the jQuery promise resolves, it passes the list of recipes to its resolution handler. If the promise fails, an alert is shown to the user indicating that a problem occurred.
Once the recipe data has been loaded, the list view model is created. The recipe array is passed as an argument to RecipeList.create(). The return value is the actual recipe list view model object.
The recipe details view model is created in a similar fashion. Its factory function accepts a single recipe, and so the selectedRecipe property on the recipe list is queried for a value. (The recipe list view model chooses the very first recipe in its data array for this value, by default.)
After the recipe details view model has been created, it subscribes to change notifications on the recipe list’s selectedRecipe observable. This is the manual equivalent of a DOM subscription created by Knockout when an observable is bound to an HTML element. The function provided to the subscribe() method will be invoked whenever selectedRecipe changes, receiving the new value as an argument. When the callback fires, the recipe details view model uses any newly selected recipe to update itself, thereby changing the values of its own observable properties.
Finally, view models are bound to the DOM when the global ko.applyBindings() function is invoked. In Listing 6-9 this function receives two arguments: the view model to be bound and the DOM element to which the view model will be bound. Any binding attribute Knockout encounters on this element or its descendants will be applied to the specified view model. If no DOM element is specified, Knockout assumes that the view model applies to the entire page. For simplistic pages this might be appropriate, but for more complex scenarios, using multiple view models that encapsulate their own data and behavior is the better option.
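A sketch of the two-argument form described here (the element IDs and view model names are placeholders):

// each view model is bound only to its own region of the page
ko.applyBindings(recipeListViewModel, document.getElementById('recipe-list'));
ko.applyBindings(recipeDetailsViewModel, document.getElementById('recipe-details'));

// calling ko.applyBindings(someViewModel) with no element applies it to the entire page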
Knockout view model properties may be bound to form controls. Many controls, such as the <input> elements, share standard bindings like value; but others like <select> have element-specific bindings. For example, the options binding controls the creation of <option> elements within a <select> tag. In general, form field bindings behave much like bindings seen in example code up to this point, but complex forms can be tricky beasts and sometimes require more creative binding strategies.
The examples in this section build on the recipe details template and view model. Specifically, an “edit” mode is introduced whereby a user viewing a particular recipe can choose to alter its details through form fields. The same view model is used, but new form field elements have been added to the recipe details template, adding additional complexity to both.

In “view” mode, the Edit button is visible

In “edit” mode, the Save and Cancel buttons are visible
The Edit button switches the page from viewing mode to edit mode (and shows the appropriate form fields for each part of the recipe being viewed). While in edit mode, the Edit button itself is hidden, but two other buttons, Save and Cancel, become visible. If the user clicks the Save button, any changes made to the recipe will be persisted; in contrast, if the user clicks the Cancel button, the edit session will be aborted and the recipe details will revert to their original states.
Because the view model itself is assigned to a variable within the RecipeDetails.create() closure, its methods may reference it by name. By avoiding the this keyword altogether, event bindings are simplified and potential bugs are avoided.
Second, each button has a visible binding attached to the view model’s isEditing observable, but only the Edit button’s binding invokes the observable directly as a function. It is also the only binding that uses a negation (!) operator, which turns the binding value into an expression. Any observable evaluated within an expression must be invoked as a function to retrieve its value. If an observable is itself used as the binding value, as is the case with the visible bindings for the Save and Cancel buttons, it will be invoked automatically when Knockout evaluates the binding.
All three methods, edit(), save(), and cancelEdit(), manipulate the value of the isEditing observable, which determines which button or buttons are displayed on the form (and, as shall be demonstrated shortly, which form fields are displayed as well). Editing begins when the edit() method is called and ends when the user either saves the recipe or cancels the editing session.
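The binding attributes on the three buttons follow this general pattern (a sketch, not the exact markup of the listing):

<button data-bind="visible: !isEditing(), click: edit">Edit</button>
<button data-bind="visible: isEditing, click: save">Save</button>
<button data-bind="visible: isEditing, click: cancelEdit">Cancel</button>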
To ensure that changes to the recipe are discarded when a user cancels the edit session, the view model serializes its state when the editing session begins in anticipation of possible reversion. If the editing session is canceled, the previous state is deserialized and the value of each observable property is effectively reset.
Knockout’s mapping plugin is distributed separately from the core Knockout library. The current version may be downloaded from http://knockoutjs.com/documentation/plugins-mapping.html . To install the plugin, simply add a <script> tag reference to the plugin script after the core Knockout <script> tag on an HTML page. It will automatically create the ko.mapping namespace property on the global ko object.
The plain JavaScript object literal that contains the data to be written to the view model’s observable properties
An object literal that maps properties on the plain JavaScript state object to observable properties on the view model (if this object is empty, it is assumed that the properties for both share the same names)
The view model that will receive the object literal’s data
The Knockout mapper plugin can serialize/deserialize view models as plain JavaScript object literals with its toJS() and fromJS() functions, or as JSON strings with its toJSON() and fromJSON() functions. These functions can be particularly useful for CRUD (create + read + update + delete) view models that bind JSON data to simple forms.
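A minimal sketch of that revert workflow using the mapping plugin (viewModel stands in for the recipe details view model):

var cachedState;

function edit() {
    // snapshot the observable properties as a plain JavaScript object
    cachedState = ko.mapping.toJS(viewModel);
    viewModel.isEditing(true);
}

function cancelEdit() {
    // write the snapshot back into the view model's observables
    ko.mapping.fromJS(cachedState, {}, viewModel);
    viewModel.isEditing(false);
}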
Although the Save button is present on the form, its method has only been stubbed in the view model. Its functionality will be added in a later example.

Editing the recipe title
New, element-specific Knockout bindings are declared for the <select> tag in Listing 6-14 to control the manner in which it uses view model data. The options binding tells Knockout which property on the view model holds the data set that will be used to create <option> elements within the tag. The binding value is the name of the property (in this case servingSizes), a plain array of read-only reference data.
The <select> tag’s value binding ties the selected value of the drop-down to an observable on the view model. When the <select> tag is rendered, this value will be automatically selected for the user in the DOM; when the user chooses a new value, the bound observable will be updated.
Finally, the optionsCaption binding creates a special <option> element in the DOM that appears at the top of the drop-down options list, but will never be set as the selected value on the view model. It is a mere cosmetic enhancement that gives some instruction to the user about how the drop-down is to be used.
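Put together, the drop-down’s bindings resemble the following sketch (the caption text is illustrative):

<select data-bind="options: servingSizes,
                   value: servings,
                   optionsCaption: 'Select a serving size...'"></select>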

Servings drop-down with a preselected value

Choosing a new value from the Servings drop-down

Creating and editing recipe ingredients
The commitNewIngredient() method evaluates the content of the newIngredient observable to determine if it is empty or not. If it is, the user has entered no text into the <input>, and so the method returns prematurely. If not, the value of newIngredient is pushed into the ingredients observable array and the newIngredient observable is cleared.
Observable arrays share a nearly identical API with normal JavaScript arrays. Most array operations, such as push(), pop(), slice(), splice(), and so on, are available on observable arrays and will trigger update notifications to the observable array’s subscribers when called.
For each ingredient in the ingredients observable array, an input is rendered above the new ingredient field. These inputs are nested within an unordered list, and their values are all bound to specific ingredients in the array, denoted by the $data variable within the foreach loop. The attr binding is used to give a name to each <input> element by concatenating the string “ingredient-” with the current index of the loop, exposed by the special $index observable. Like any observable used in a binding expression, $index must be invoked to retrieve its value.
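A simplified sketch of the markup pattern being described:

<ul data-bind="foreach: ingredients">
    <li>
        <input data-bind="value: $data,
                          attr: { name: 'ingredient-' + $index() }" />
    </li>
</ul>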
It cannot be emphasized enough that the bindings exposed by observable arrays apply only to the arrays themselves and not to the elements they contain. When each ingredient is bound to a DOM <input> element, it is wrapped in the $data observable, but there is no communication between this observable and the containing observable array. If the value within $data changes because of input, the array will be oblivious and still contain its own copy of the unchanged data. This is a source of consternation, but there are several coping strategies that make it bearable.
First, the observable ingredients array could be filled with objects that each expose the ingredient text as an observable property (something like { ingredient: ko.observable('20 mushrooms') }). The value binding of each <input> would then use each object’s $data.ingredient property to establish a two-way binding. The observable array still remains ignorant of changes to its members, but because each element is an object that tracks its own data through an observable, this becomes a moot point.
The second approach, taken in Listing 6-19, is to listen for change events on each <input> element through the valueUpdate and event bindings and then tell the view model to replace specific ingredient values in the ingredients observable array as they change. Neither way is “right”—both merely have their own advantages and disadvantages.
The valueUpdate binding first instructs Knockout to change the value of $data each time the DOM input event fires on each <input> element. (Remember: Knockout normally updates $data once an element loses focus, not when it receives input.) Second, a Knockout event binding is added that invokes the changeIngredient() method on the view model every time the DOM input event fires as well. By default Knockout submits the current value of $data to changeIngredient(), but since the new value will replace the old, the view model must know which index in the ingredients array is being targeted. Using bind(), the value of $index is bound to the method as the first argument, ensuring that the value of $data will be the second.

Creating and editing recipe instructions
Like the minus button, both up and down buttons use Knockout click bindings to invoke methods on the view model, passing the associated item index as an argument to each.

Updating a recipe’s citation
With inspiration from the popular webcomponents.js polyfill ( http://webcomponents.org ), Knockout provides a custom component system that produces reusable HTML elements with custom tag names, markup, and behavior.
A factory function that creates a view model for each instance of the custom component on a page
An HTML template with its own Knockout bindings that will be injected wherever the component is used
A custom tag registration that tells Knockout where to find the template and how to instantiate its view model when it encounters component tags on a page
The recipe details view model already possesses the properties and methods used to manipulate its ingredients and instructions arrays, but it is necessary to abstract this code and move it into its own module, input-list.js, so that Knockout can use it exclusively for the new input list component.
Listing 6-27 shows an abbreviated version of the input list module. It is structured in the same manner as the other view model factory modules, exposing a create() method on the global InputList object. This factory method accepts a params parameter that will be used to pass the input list component a reference to an observable array (params.items) and a host of optional settings that will determine how the input list will behave when bound to the rendered template: params.isOrdered, params.enableAdd, params.enableUpdate, and params.enableRemove.
The params.items and params.isOrdered properties correspond to the binding attributes in Listing 6-26. When a component is used on a page, the values of its binding attributes are passed, by reference, to the component’s view model via the params object. In this scenario, input list components will be given access to the ingredients and instructions observable arrays on the recipe details view model.
Input list methods have been redacted in Listing 6-27 because they are nearly identical to their counterparts in Listing 6-24. Instead of referencing ingredients or instructions, however, these methods reference the abstracted items observable array. The component populates this array with data it receives from params.items. The newItem observable holds the value of the new item input, in exactly the same manner as the newIngredient and newInstruction observables behaved in the recipe-details.js module. It is not shared with the recipe details view model, however, as it only has relevance within the input list.
Since the input list component will now handle the manipulation of the Ingredients and Instructions lists on the page, the properties and methods in the recipe details view model that previously performed these manipulations have been removed.
A reusable component needs an abstracted, reusable template, so the markup associated with editing instructions and ingredients has also been collected into a single HTML template. Each time an instance of the input list component is created on the page, Knockout will inject the template into the DOM, then bind a new instance of the input list view model to it.
Since the input list component can accommodate both ordered and unordered lists, the template must use Knockout bindings to intelligently decide which kind of list to display. Only ordered lists will have promotion and demotion buttons, while items can be added and removed from both kinds of lists. Since the input list view model exposes boolean properties it receives from its params object, the template can alter its behavior based on the values of those properties. For example, if the view model property isOrdered is true, the template will show an ordered list; otherwise it will show an unordered list. Likewise the fields and buttons associated with adding new items or removing existing items are toggled by the enableAdd and enableRemove properties, respectively.
There is a lot of markup to digest in the input list template, but it is really just the combination of both the unordered Ingredients list and the ordered Instructions list, with a shared new item field.
Special binding comments—the ko if and ko ifnot comment blocks—wrap portions of the template to determine if the elements within the comment blocks should be added to the page. These comment blocks evaluate properties on the view model and alter the template processing control flow accordingly. This differs from the visible element bindings, which merely hide elements that already exist in the DOM.
The syntax used within ko comment block bindings is known as containerless control flow syntax.
All fields and buttons in the input list template are bound to properties and methods on the input list view model. If a demote button is clicked, for example, the input list view model will manipulate its internal items collection, which is really a reference to the instructions observable array in the recipe details view model, shared via the items binding. The template determines which type of list to display based on the isOrdered property, while the add and remove controls are toggled based on the enableAdd and enableRemove properties. Because these properties are read from the params object in the view model, any of them may be added to the <input-list> component tag as a binding attribute. In this way the component abstracts and encapsulates all operations made against any collection that can be represented as a list of inputs.
Once a component view model and template have been defined, the component itself must be registered with Knockout. This tells Knockout how to resolve component instances when it encounters the component’s custom tag in the DOM and also what template and view model to use when rendering the component’s contents.
In Listing 6-29, the ko.components.register() function receives two arguments: the name of the new component’s custom tag, input-list, and an options hash that provides Knockout with the information it needs to construct the component.
Knockout uses the custom tag name to identify the <input-list> element in the DOM and replace it with the template content specified in the options hash.
Since markup for the input list element has been defined in a <template> element, the Knockout component system only needs to know what element ID it should use to find that element in the DOM. The template object in the options hash contains this ID in its element property. For smaller components, the entire HTML template could be assigned, as a string, to the template property directly.
To construct a view model for the component, a factory function is assigned to the viewModel property of the options hash. This property can also reference a regular constructor function, but using factory functions sidesteps potential problems that arise when event bindings reassign the this keyword within view models. Regardless of approach, the view model function will receive a params object populated with values from the template’s binding declarations.
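A sketch of what such a registration looks like (the template element ID here is a placeholder):

ko.components.register('input-list', {
    template: { element: 'input-list-template' },
    viewModel: function (params) {
        // a factory that builds and returns the component's view model
        return InputList.create(params);
    }
});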
Knockout can load component templates and view model functions via RequireJS automatically. Consult the Knockout component documentation for more details. The RequireJS module loader is covered in Chapter 5.
Not only are the complexities of the input list obscured behind the new <input-list> tag, but aspects of the list, such as the ability to add and remove items, are controlled through bound attributes. This promotes both flexibility and maintainability as common behaviors are bundled into a single element.
At this point the recipe details view model manipulates the recipe data but does nothing to persist changes. It also fails to communicate recipe changes to the recipe list, so even if a user modifies a recipe’s title, the recipe list continues to display the recipe’s original title. From a use case perspective, the recipe list should only be updated if the recipe details are sent to the server and successfully persisted. A more sophisticated mechanism is needed to facilitate this workflow.
Knockout observables implement the behavior of a Knockout subscribable, a more abstract object that does not hold a value but acts as a kind of eventing mechanism to which other objects may subscribe. Observables take advantage of the subscribable interface by publishing their own changes through subscribables, to which DOM bindings (and perhaps even other view models) listen.
To effectively publish an updated recipe to the subscribable, the recipe details view model has been modified in several ways.
First, the subscribable is passed to the recipe details factory function as an argument named bus (shorthand for “poor developer’s message bus”). The recipe details module will use this subscribable to raise events when recipe details change.
Second, the view model now tracks the recipe’s ID since this value will be used to update recipe data on the server. The recipe list will also use the ID to replace stale recipe data after changes have been saved.
The callback function to be executed when the specified event is triggered on the subscribable
The context object that will be bound to the this keyword within the callback function (or null, if the this keyword is never used within the callback)
The name of the event to which the callback is subscribed (e.g., recipe.saved)
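A rough sketch of the pattern, assuming a shared ko.subscribable instance is handed to each view model factory as bus:

var bus = new ko.subscribable();

// subscriber side (e.g., the recipe list view model)
bus.subscribe(function (updatedRecipe) {
    console.log('refresh the list entry for recipe', updatedRecipe.id);
}, null, 'recipe.saved');

// publisher side (e.g., the recipe details view model, after a successful save)
bus.notifySubscribers({ id: 1, title: 'Updated title' }, 'recipe.saved');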
Though subscribables aren’t the only way to raise events in an application, they can be effective for straightforward use cases, creating a decoupled communication chain between modules.
Many front-end frameworks offer suites of compelling features and plugins, but Knockout really focuses on the interaction between the HTML view and data model in an application. Knockout’s observables alleviate the pain of manually pulling data from, and pushing data to, HTML DOM elements. Developers can add data-bind attributes to any element on a page, gluing the markup to one or more view models through two-way bindings.
While form data can be directly bound to view model properties, DOM event bindings can also invoke methods on Knockout view models as well. Any changes these methods make to view model observable properties are immediately reflected in the DOM. Bindings like visible and css determine how an element is displayed to the user, while bindings like text and value determine an element’s content.
Observables are special objects that hold view model data values. When their values change, observables notify any interested subscribers, including bound DOM elements. Primitive observables hold single values, while observable arrays hold collections. Mutations that happen on observable arrays can be tracked and mirrored by HTML elements that are bound to the collection. The foreach binding is especially useful when iterating over an observable array’s elements, though special considerations must be taken if individual members of an observable array are changed or replaced.
Knockout templates and view models can be abstracted into reusable components with unique HTML tags. These components can be added to a page and bound to other view model properties, just as any standard HTML elements would be bound. Encapsulating state and behavior in a component reduces the total markup on a page and also guarantees that similar portions of an application (e.g., a list of inputs bound to a collection) behave the same wherever used.
Finally, subscribable objects—the basic building blocks behind observables—can be used as primitive message busses, notifying subscribers of published events and potentially delivering payloads of data where needed.
Knockout web site: http://knockoutjs.com/
In this chapter, we will be turning our attention toward Angular, one of the world’s most popular frameworks and a leading name in JavaScript and web development. In fact, if you have had even a basic introduction to web development, you have probably already heard of Angular as well as AngularJS.
Basically, Angular is a front-end web application development framework that is maintained and developed by the Angular Team at Google. Yes, it is backed by the likes of Google and has a very thriving and active community around the world.
In this chapter, we will be learning about installation, setup, Dependency Injection, as well as how to get the most out of this web application framework. But before going any further, we need to understand one key difference.
Did you notice we mentioned Angular and AngularJS separately? They are two different frameworks built atop two different, albeit closely related, platforms. Web developers of all skill levels are interested in learning Angular, especially those working on web apps, so it is a good idea to first familiarize ourselves with the basic differences between the two.
If we are speaking purely of JavaScript frameworks when comparing Angular with AngularJS, the latter is the answer. This is because Angular is written in TypeScript, which happens to be a superset of JavaScript.

Angular is a popular web framework that is used to build progressive web apps
In September 2016, Angular 2.0 was released, and this is where the stark difference began. The name was now just “Angular” (see Figure 7-1), reflecting the shift toward TypeScript as opposed to JavaScript. Angular in itself is a complete rewrite of AngularJS.
All subsequent releases have likewise been named simply “Angular,” with Angular 7 being the latest one at the time of writing. Angular 8, however, is likely to be released soon.
AngularJS is the original release, and it can also be called Angular 1.0. This particular version of the framework is based on pure JavaScript.
Angular 2.0 and its subsequent versions are based on TypeScript and do not follow the same nomenclature as version 1.0 (no “JS”).
But this is not where the differences end.
Support for object-oriented programming
Static types
Lambdas and iterators
For loops
Dynamic loading
A custom set of UI components
Python-style generators
Many of the preceding features are made possible by the fact that Angular is based on TypeScript. Having said that, whatever happened to AngularJS? Well, it is still under active development, albeit in Long Term Support mode (meaning it receives only vital and essential updates). The reason is simple: a good number of agencies, developers, and organizations have long relied on the popular JavaScript framework that is AngularJS. Migrating entirely from AngularJS to Angular requires a good deal of time and effort and may invite code and compatibility issues.
As such, both AngularJS and Angular continue to be under development, and each has its own community and user base. Considering that Angular is the newer variant and comes loaded with additional features, it is only natural that more and more new developers are keen on learning Angular as opposed to AngularJS. In this chapter, as a result, we will be focusing on Angular.
However, it should also be pointed out that since AngularJS is still being used in a large number of enterprise-level projects, it is far from obsolete, and many developers choose to learn both frameworks in order to improve their job prospects. Nevertheless, even the AngularJS web site has a call-to-action button that takes visitors to Angular; the new version is the future of this framework.
You can learn more about AngularJS here: https://angularjs.org/
Now that we have learned what the major differences are between AngularJS and Angular, we can safely focus on getting things rolling with Angular development.
The first step, obviously, is to install Angular on our development environment.
Angular requires Node.js version 8.x or higher to function. This means if our system does not already have the latest version of Node.js, we need to first install it.

Node.js supports multiple operating systems and can be installed on Windows, Mac, as well as Linux
Node.js comes with multiple installers, each suited to a particular family of operating systems. For Windows users, for instance, the installer is a simple executable file. Similarly, there are relevant versions available for Linux and Mac users as well. All of these versions, as well as older and other releases of Node.js, are available on the download page.
Learn more about Node.js here: https://nodejs.org/en/
Once we have Node.js installed and set up on our system, we are ready to begin the Angular installation. It is noteworthy that npm, the Node Package Manager, will automatically be installed when we install Node.js.
As such, we can simply run the relevant npm command to install it. As a first step, it is recommended to install the Angular CLI, which will enable us to create projects and generate and execute apps in Angular right from the command line. The Angular CLI can perform a multitude of tasks related to testing, building, and deployment of Angular apps.
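The command itself is a single global npm install (the -g flag makes the ng command available system-wide):

$ npm install -g @angular/cli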

Installing Angular CLI using npm
It is very important to ensure that the Node.js and npm versions are up to date. As can be seen in the preceding example, the npm version is older than the recommended one, and as such, the engine throws a warning.
Now that we have installed Angular CLI, we can start by creating an Angular workspace.
In Angular, a workspace is a folder that contains projects (i.e., apps and libraries). The CLI ng new command creates a workspace to contain projects. Commands that create or operate on apps and libraries (such as add and generate) must be executed from within a workspace folder.
In simpler words, all applications in Angular are made up of files. Now, these files are contained within a given project, thereby implying that a project will contain files that are related to a particular app or library.
Now, a workspace is an entity that contains files for one or more projects. As such, we first need to create a workspace, then build our app, and then modify or tweak its files to suit our purpose. Furthermore, in Angular, a “project” refers to a set of files and libraries that are related to a specific purpose or app.
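With the CLI installed, creating the workspace and its initial app is a single command; my-first-app is the name used throughout this chapter:

$ ng new my-first-app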

Creating our first app in Angular
Following that, Angular will install the required dependencies and project files.
Once our workspace and project have been set up, we can navigate to the concerned directory. We will find that the workspace has a root folder named after our app, that is, my-first-app, which contains the files and data related to the project.
The src subdirectory too will have a my-first-app directory, but this is where our skeleton app project resides. The end-to-end test files will be in the e2e folder.

Directory structure of an Angular project by default
The app project that we just built contains a sample welcome app. It might be a good idea to try running it locally first.
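To do so, we can use the CLI’s serve command from within the workspace folder:

$ cd my-first-app
$ ng serve --open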

Launching our Angular app using ng serve command
The --open flag will automatically open the app in the default browser. If we so desire, we can omit it and manually navigate to localhost:4200 in a web browser instead.

Angular application running at localhost:4200—any changes will automatically be reflected upon save
When we built our first Angular app using the CLI, we also created our first Angular component.
An Angular component class is responsible for exposing data and handling most of the view's display and user-interaction logic through data binding.
In other words, Angular components are what we use to display data on screen, seek input from the user, and so on.
In general, most Angular apps have a root component named app-root. In our sample app, it is the file named app.component.ts in the directory /src/app.

Contents of the App components file

The server automatically updates the output as per the changes

The new title is shown in the browser
Furthermore, we can see that the app.component.css file is responsible for CSS styles for our app.

Adding CSS to modify the app heading appearance

Recompiling to reflect the latest saved changes

CSS changes reflected in output of Angular app
At this point, we have learned how to build and serve a basic app in Angular. Obviously, this is not all that Angular can do. However, from here, we can focus on more complex app development and dig deeper to see if Angular suits our workflow needs.
One of the key features of Angular, especially its latest versions, is the fact that it has its own Dependency Injection framework. As an application design pattern, Dependency Injection in Angular is used by several apps to implement a modular design workflow.
In web development, we define “dependencies” as simple entities (such as classes, objects, or actions) that a class needs in order to perform its role. In other words, if all the dependencies are not met, the class cannot function properly.
In Angular, the Dependency Injection framework ensures that all the dependencies are available when the class is first instantiated. This makes it easier to develop flexible, fast apps, as we do not need to write bulky and bloated code.
In the preceding code, we are calling the falsPoster() method.
Now, we can make use of our funcDemo() method and use the DI framework within the injected service.
In this chapter, we familiarized ourselves with Angular at a basic level. We learned what this particular TypeScript framework is, how it differs from its other variant, and how to get started with Angular. Furthermore, we also learned how to install Angular and build as well as serve a sample app. Finally, we looked at Dependency Injection, one of the key features of Angular.
At this point, the next step should ideally be to turn toward more complex projects and delve deeper into Angular. It is, however, noteworthy that Angular, as a framework, has a very large community and user base. In fact, it is one of the most popular web frameworks when it comes to building web apps. Naturally, the job market and industry demand are stellar as well.
Angular Homepage: https://angular.io/
Angular Documentation: https://angular.io/docs
Progressive Web Apps with Angular: www.apress.com/in/book/9781484244470
Pro Angular 6: www.apress.com/in/book/9781484236482
An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.
—Jack Welch
As development platforms go, Node is no longer the new kid on the block. But as many well-known and respected organizations will attest, the benefits afforded by JavaScript as a server-side language have already had a tremendous impact on the manner in which they develop and deploy software. Among the many accolades for Node, Michael Yormark, Project Manager at Dow Jones, has proclaimed “The simple truth is Node has reinvented the way we create websites. Developers build critical functionality in days, not weeks.” ( www.joyent.com/blog/the-node-firm-and-joyent-offer-node-js-training )
We especially liked the ubiquity of Express, but found it didn’t scale well in multiple development teams. Express is non-prescriptive and allows you to set up a server in whatever way you see fit. This is great for flexibility, but bad for consistency in large teams… Over time we saw patterns emerge as more teams picked up node.js and turned those into Kraken.js; it’s not a framework in itself, but a convention layer on top of express that allows it to scale to larger development organizations. We wanted our engineers to focus on building their applications and not just focus on setting up their environments.
Environment-aware configuration
Configuration-based middleware registration
Structured route registration
The Dust template engine
Internationalization and localization
Enhanced security techniques
Kraken builds on the already firm foundation of Express, the minimalist web framework for Node whose API has become the de facto standard for frameworks in this category. As a result, this chapter assumes the reader already has a basic, working familiarity with Express. Portions of this chapter also discuss concepts covered in this book’s chapters on Grunt, Yeoman, and Knex/Bookshelf. If you are unfamiliar with these subjects, you may wish to read those chapters before you continue.

Application that requires unique settings based on its environment
As the application in Figure 8-1 progresses through each environment, the settings that tell it how to connect to the various external services on which it relies must change accordingly. Kraken’s confit library provides developers with a standard convention for accomplishing this goal by offering a simple, environment-aware configuration layer for Node applications.
Confit operates by loading a default JSON configuration file (typically named config.json). Confit then attempts to load an additional configuration file based on the value of the NODE_ENV environment variable. If an environment-specific configuration file is found, any settings it specifies are recursively merged with those defined within the default configuration.
Before continuing, notice that our project’s default configuration file provides connection settings for an e-mail server under the email property, while neither of the project’s environment-specific configuration files provides such information. In contrast, the default configuration provides connection settings for a Redis cache server under the nested cache:redis property, while both of the environment-specific configurations provide overriding information for this property.
Notice also that the default configuration file includes a comment above the email property. Comments, which are not part of the JSON specification, would normally result in an error being thrown if we attempted to use Node’s require() method to parse the contents of this file. Confit, however, will strip out such comments before attempting to parse the file, allowing us to embed comments within our configuration as needed.
In Listing 8-3, $ export NODE_ENV=development is run from the terminal to set the value of the NODE_ENV environment variable. This command applies only to Unix and Unix-like systems (including OS X). Windows users will instead need to run $ set NODE_ENV=development. It’s also important to remember that if the NODE_ENV environment variable is not set, confit will assume the application is running in the development environment.
As you can see in Listing 8-3, confit compiled our project’s configuration object by merging the contents of the config/development.json environment configuration file with the default config/config.json file, giving priority to any settings specified in development.json. As a result, our configuration object inherited the email settings that only exist in config.json, along with the cache and database settings defined within the configuration file for the development environment. In Listing 8-1, these settings are accessed through the use of the configuration object’s get() method.
In addition to accessing top-level configuration settings (e.g., database, as shown in Listing 8-1), our configuration object’s get() method can also be used to access deeply nested configuration settings using : as a delimiter. For example, we could have referenced the project’s postgresql settings directly with config.get('database:postgresql').
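A minimal sketch of loading the configuration and reading both top-level and nested settings (the file layout follows the conventions described above):

'use strict';

var path = require('path');
var confit = require('confit');

var basedir = path.join(__dirname, 'config');

confit(basedir).create(function (err, config) {
    if (err) { throw err; }

    console.log(config.get('database'));             // top-level settings
    console.log(config.get('database:postgresql'));  // nested settings, ':' delimited
});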
While confit itself only includes support for the two shortstop handlers that we’ve just covered (import and config), several additional handlers that are quite useful can be found in the shortstop-handlers module. Let’s take a look at four examples.
The main script (index.js) from this chapter’s confit-shortstop-extras project is shown in Listing 8-7. This script largely mirrors the one we’ve already seen in Listing 8-1, with a few minor differences. In this example, additional handlers are imported from the shortstop-handlers module. Also, instead of instantiating confit by passing the path to our project’s config folder (basedir), we pass an object of options. Within this object, we continue to specify a value for basedir, but we also pass a protocols object, providing confit with references to the additional shortstop handlers we’d like to use.
file: Sets a property using the contents of a specified file
require: Sets a property using the exported value of a Node module (particularly useful for dynamic values that can only be determined at runtime)
glob: Sets a property to an array containing files whose names match a specified pattern
path: Sets a property to the absolute path of a referenced file
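Wiring those handlers up looks roughly like this; the protocols object maps each protocol name to a handler produced by the shortstop-handlers factories (the setting name read at the end is hypothetical):

'use strict';

var path = require('path');
var confit = require('confit');
var handlers = require('shortstop-handlers');

var basedir = path.join(__dirname, 'config');

confit({
    basedir: basedir,
    protocols: {
        file: handlers.file(basedir),
        require: handlers.require(basedir),
        glob: handlers.glob(basedir),
        path: handlers.path(basedir)
    }
}).create(function (err, config) {
    if (err) { throw err; }
    // values such as "path:./data" or "require:./lib/settings" have now been resolved
    console.log(config.get('someSetting'));
});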

Series of Express middleware calls
Modify the incoming request object
Modify the outgoing response object
Execute additional code
Close the request-response cycle
Call the next middleware function in the series
The morgan module logs the request to the console.
The cookie-parser module parses data from the request’s Cookie header and assigns it to the request object’s cookies property.
The ratelimit-middleware module rate-limits clients that attempt to access the application too frequently.
Finally, the appropriate route handler is called.
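A stripped-down sketch of what that chain looks like when registered in code with plain Express (the rate limiter is omitted because its configuration is project specific):

'use strict';

var express = require('express');
var morgan = require('morgan');
var cookieParser = require('cookie-parser');

var app = express();

app.use(morgan('combined'));   // log each incoming request to the console
app.use(cookieParser());       // populate req.cookies from the Cookie header
// ... a rate-limiting middleware would be registered here ...

app.get('/', function (req, res) {
    res.send('Hello, world');  // the route handler runs last
});

app.listen(8000);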
This approach provides developers with a considerable degree of flexibility, allowing them to execute their own logic at any point during the request-response cycle. It also allows Express to maintain a relatively small footprint by delegating responsibility for performing nonessential tasks to third-party middleware modules. But as flexible as this approach is, it can also prove troublesome to manage as applications and the teams that develop them grow in size and complexity.
With the help of Kraken’s meddleware module , all aspects of third-party middleware management within this application have been moved from code to standardized configuration files. The result is an application that is not only more organized but also easier to understand and modify.
In the previous section, you learned how Kraken’s meddleware module can simplify middleware function registration by moving the logic required for loading and configuring those functions into standardized JSON configuration files. In much the same way, Kraken’s enrouten module applies the same concept to bring structure where there often is none to be found—Express routes.
Simple Express applications with a small number of routes can often make do with a single module in which every available route is defined. However, as applications gradually grow in depth and complexity, such an organizational structure (or lack thereof) can quickly become unwieldy. Enrouten solves this problem by providing three approaches with which Express routes can be defined in a consistent, structured fashion.
Using enrouten’s index configuration option , the path to a single module can be specified. This module will then be loaded and passed an Express Router instance that has been mounted to the root path. This option provides developers with the simplest method for defining routes, as it does not enforce any specific type of organizational structure. While this option provides a good starting point for new applications, care must be taken not to abuse it. This option is often used in combination with enrouten’s directory and routes configuration options, which we will cover shortly.

Structure of this project’s /routes folder
Enrouten’s directory configuration option provides an approach that favors “convention over configuration” by automatically determining the structure of an application’s API based on the layout of a specified folder. This approach provides a quick and easy method for structuring Express routes in an organized and consistent way. However, complex applications may eventually come to find this approach to be rather confining.
Applications with APIs that feature a number of complex, deeply nested routes will likely find greater benefit from enrouten’s routes configuration option, which allows developers to create completely separate modules for each of the application’s routes. API endpoints, methods, handlers, and route-specific middleware are then specified within configuration files—an organized approach that allows for the greatest degree of flexibility, at the expense of being slightly more verbose.
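A sketch of the three options in plain Express terms; only one would typically be used at a time, and the handler shown is hypothetical:

'use strict';

var express = require('express');
var enrouten = require('express-enrouten');

var app = express();

app.use(enrouten({
    // index: 'routes/index',   // a single module that receives a mounted Router
    // directory: 'routes',     // convention over configuration: scan a folder
    routes: [{
        path: '/feeds',
        method: 'GET',
        handler: function (req, res) {
            res.json([]);
        }
    }]
}));

app.listen(8000);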
Many popular JavaScript template engines (e.g., Mustache and Handlebars) tout themselves as being “logic-less”—an attribute that describes their ability to help developers maintain a clear separation of concerns between an application’s business logic and its presentation layer. When properly maintained, this separation makes it possible for significant changes to occur within the interface that users are presented with while requiring minimal (if any) accompanying changes behind the scenes (and vice versa).
Logic-less template engines attempt to prevent developers from creating spaghetti code by banning the use of logic within an application’s views. Such templates are typically capable of referencing values within a provided payload of information, iterating through arrays, and toggling specific portions of their content on and off based on simple boolean logic.
Unfortunately, this rather heavy-handed approach often brings about the very problems it hoped to prevent, albeit in an unexpected way. Although logic-less template engines such as Handlebars prevent the use of logic within templates themselves, they do not negate the need for that logic to exist in the first place. The logic required for preparing data for template use must exist somewhere, and more often than not, the use of logic-less template engines results in presentation-related logic spilling over into the business layer.
Dust, which is the JavaScript template engine favored by Kraken, seeks to solve this problem by taking an approach that is better thought of as “less-logic” rather than strictly “logic-less.” By allowing developers to embed slightly more advanced logic within their templates in the form of “helpers,” Dust allows presentation logic to remain where it belongs, in the presentation layer, rather than the business layer.
As Dust goes about its rendering process, it fetches referenced data by applying one or more “contexts” to the template in question. The simplest templates have a single context that references the outermost level of the JSON object that was passed. For example, consider the template shown in Listing 8-21, in which two references are used, {report_name} and {misc.total_population}. Dust processes these references by searching for matching properties (starting at the outermost level) within the object shown in Listing 8-20.
When applying conditionality within a template, it is important to understand the rules that Dust will apply as it determines the “truthiness” of a property. Empty strings, boolean false, empty arrays, null, and undefined are all considered to be false. The number 0, empty objects, and string-based representations for “0,” “null,” “undefined,” and “false” are all considered to be true.
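A small sketch of how a conditional section reads in a Dust template, using the exists (?) operator with an {:else} body (the property name is illustrative):

{?citation}
    <a href="{citation}">Source</a>
{:else}
    <p>No source available.</p>
{/citation}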
Consider a commonly encountered scenario in which a complex web application consisting of multiple pages is created. Each of these pages displays a unique set of content while at the same time sharing common elements, such as headers and footers, with the other pages. With the help of Dust blocks, developers can define these shared elements in a single location. Afterward, templates that wish to inherit from them can, while also retaining the ability to overwrite their content when necessary.
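A sketch of the pattern: a base template defines named blocks with default content, and a child template pulls in the base as a partial and overrides only the blocks it cares about (the file names are illustrative):

{! layouts/base.dust !}
<header>{+header}Default site header{/header}</header>
<main>{+content}{/content}</main>
<footer>{+footer}Default site footer{/footer}</footer>

{! pages/report.dust !}
{>"layouts/base"/}
{<content}
    <h1>{report_name}</h1>
{/content}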
| Filter | Description |
|---|---|
| s | Disables HTML escaping |
| h | Forces HTML escaping |
| j | Forces JavaScript escaping |
| u | Encodes with encodeURI() |
| uc | Encodes with encodeURIComponent() |
| js | Stringifies a JSON literal |
| jp | Parses a JSON string |
In addition to storing data, Dust contexts are also capable of storing functions (referred to as “context helpers”), the output of which can later be referenced by the templates to which they are passed. In this way, a Dust context can be thought of as more than a simple payload of raw information, but rather as a view model, a mediator between an application’s business logic and its views, capable of formatting information in the most appropriate manner along the way.
As shown in this example, every Dust context helper receives four arguments: chunk, context, bodies, and params. Let’s take a look at a few examples that demonstrate their usage.
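As a rough sketch, a context helper living alongside the data it formats might look like this (the property names are illustrative):

var context = {
    misc: { total_population: 8398748 },

    // referenced in a template simply as {formattedPopulation}
    formattedPopulation: function (chunk, context, bodies, params) {
        var population = context.get('misc.total_population');
        return chunk.write(population.toLocaleString());
    }
};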
Imagine a scenario in which large portions of a template’s content are determined by one or more context helpers. Instead of forcing developers to concatenate strings in an unwieldy fashion, Dust allows such content to remain where it belongs—in the template—available as options from which a context helper can choose to render.
Helper functions provide Dust with much of its power and flexibility. They allow a context object to serve as a view model—an intelligent bridge between an application’s business logic and its user interface, capable of fetching information and formatting it appropriately for a specific use case before passing it along to one or more views for rendering. But as useful as this is, we’ve really only begun to scratch the surface in terms of how these helper functions can be applied to powerful effect.
Depending on the complexity of the template in question, the impact this approach can have on user experience can often be dramatic. Rather than forcing users to wait for an entire page to load before they can proceed, this approach allows us to push content down to the client as it becomes available. As a result, the delay that users perceive when accessing an application can often be reduced significantly.
In the previous section, we explored how context objects can be extended to include logic that is relevant to a specific view through the use of context helpers. In a similar manner, Dust allows helper functions to be defined at a global level, making them available to all templates without being explicitly defined within their contexts. Dust comes packaged with a number of such helpers. By taking advantage of them, developers can more easily solve many of the challenges that are often encountered when working with stricter, logic-less template solutions.
| Logic Helper | Description |
|---|---|
| @eq | Strictly equal to |
| @ne | Not strictly equal to |
| @gt | Greater than |
| @lt | Less than |
| @gte | Greater than or equal to |
| @lte | Less than or equal to |
| Iteration Helper | Description |
|---|---|
| @sep | Renders content for every iteration, except the last |
| @first | Renders content only for the first iteration |
| @last | Renders content only for the last iteration |
The various “methods” supported by Dust’s @math helper include add, subtract, multiply, divide, mod, abs, floor, and ceil.
In this example’s template, a loop is created in which we iterate through each person defined within the context. For each person, a message is displayed if they happen to fall within the 20-something age bracket. First, this message is displayed using a combination of preexisting logic helpers, @gte and @lt. Next, the message is displayed again, using a custom @inRange helper that has been defined at the global level.
Now that you are familiar with many of the fundamental components that Kraken relies on, let’s move forward with creating our first real Kraken application.
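The commands involved are roughly as follows; the generator then prompts for a project name and a few other options:

$ npm install -g yo generator-kraken
$ yo kraken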

Creating a Kraken application using the Yeoman generator

Initial file structure for the app project

Viewing the project in the browser for the first time
The kraken-js module, which we see here, is nothing more than a standard Express middleware library. However, instead of simply augmenting Express with some small bit of additional functionality, Kraken takes responsibility for configuring a complete Express application. It will do so with the help of many other modules, including those that have already been covered in this chapter: confit, meddleware, enrouten, and adaro.
As shown in Listing 8-48, Kraken is passed a configuration object containing an onconfig() callback function, which will be called after Kraken has taken care of initializing confit for us. Here we can provide any last-minute overrides that we may not want to define directly within the project’s JSON configuration files. In this example, no such overrides are made.
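A simplified sketch of the generated main script (not a verbatim copy of Listing 8-48):

'use strict';

var express = require('express');
var kraken = require('kraken-js');

var options = {
    onconfig: function (config, next) {
        // last-minute configuration overrides could be applied here
        next(null, config);
    }
};

var app = express();
app.use(kraken(options));

app.listen(8000, function () {
    console.log('Server listening on http://localhost:8000');
});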
In addition to creating a default controller for our project, Kraken has also taken care of creating a corresponding model, IndexModel, which you can see referenced in Listing 8-49. We will discuss Kraken’s relationship with models shortly, but first, let’s walk through the process of creating a new controller of our own.
controllers/feeds.js: Controller
models/feeds.js: Model
test/feeds.js: Test suite
public/templates/feeds.dust: Dust template
locales/US/en/feeds.properties: Internationalization settings
For the moment, let’s place our focus on the first three files listed here, starting with the model. We’ll take a look at the accompanying Dust template and internalization settings file in the next section.
Unlike many other “full-stack” frameworks that attempt to provide developers with tools that address every conceivable need (including data persistence), Kraken takes a minimalistic approach that does not attempt to reinvent the wheel. This approach recognizes that developers already have access to a wide variety of well-supported libraries for managing data persistence, two of which are covered by this book: Knex/Bookshelf and Mongoose.
The updated model shown in Listing 8-51 assumes that you are already familiar with the Knex and Bookshelf libraries, along with the steps necessary to configure them. If that is not the case, you may want to read Chapter 10. Regardless, this chapter’s app project provides a fully functioning demonstration of the code shown here.
List feeds
Fetch information regarding a specific feed
Fetch articles from a specific feed
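A rough approximation of how the first two of these routes might be wired to a Bookshelf-backed model (the model’s API is assumed here, and fetching a feed’s articles is omitted for brevity):

'use strict';

var Feed = require('../models/feeds');

module.exports = function (router) {

    // List feeds
    router.get('/', function (req, res) {
        Feed.fetchAll().then(function (feeds) {
            res.json(feeds.toJSON());
        });
    });

    // Fetch information regarding a specific feed
    router.get('/:id', function (req, res) {
        Feed.where({ id: req.params.id }).fetch().then(function (feed) {
            res.json(feed.toJSON());
        });
    });

};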
In the next section, we will take a look at the test suite that Kraken has created for this portion of our application. With this test suite, we can verify that the routes we have defined work as expected.
The server should respond with an HTTP status code of 200.
The server should respond with a Content-Type header containing the string html.
The body of the response should contain the string "name": "index".
The server should respond with an HTTP status code of 200.
The server should respond with a Content-Type header containing the string json.
The server should return one or more results in the form of an array.
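A sketch of how the /feeds expectations might be expressed with Mocha and SuperTest, assuming a setup that boots the Kraken app in a before hook:

'use strict';

var express = require('express');
var kraken = require('kraken-js');
var request = require('supertest');

describe('/feeds', function () {

    var app;

    before(function (done) {
        app = express();
        app.use(kraken());
        app.on('start', done);   // kraken emits 'start' once configuration has loaded
        app.on('error', done);
    });

    it('responds with a JSON array of feeds', function (done) {
        request(app)
            .get('/feeds')
            .expect(200)
            .expect('Content-Type', /json/)
            .expect(function (res) {
                if (!Array.isArray(res.body)) {
                    throw new Error('expected an array of feeds');
                }
            })
            .end(done);
    });

});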
Recall that our application’s Feed model was created with the help of the Knex and Bookshelf libraries. The data that you see referenced in this project originates from a Knex “seed” file (seeds/developments/00-feeds.js) with which we can populate our database with sample data. At any point, this project’s SQLite database can be reset to its initial state by running $ grunt reset-db from the command line. If these concepts are unfamiliar to you, you may want to read Chapter 10.

Running the test suite
Kraken provides built-in support for creating applications that are capable of adapting themselves to meet the unique needs of multiple languages and regions, an important requirement for most products that hope to see widespread use across multiple, diverse markets. In this section we’ll take a look at the two steps by which this is accomplished, internationalization and localization, and how they can be applied within the context of a Kraken application whose templates are generated on the server.
Take note of the location of this example’s template, public/templates/index.dust, and the location of its corresponding content property files, locales/US/en/index.properties and locales/ES/es/index.properties. Kraken is configured to pair Dust templates with content property files such as these on a one-to-one basis, by matching them based on their paths and file names.

English version of the application’s home page

Spanish version of the application’s home page
The example shown in Listing 8-58 demonstrates the process by which specific regional settings can be manually assigned to an incoming request. What it does not demonstrate, however, is the process by which a user’s desired localization settings can be automatically detected.
While helpful, the accept-language HTTP request header does not always reflect the desired localization settings of the user making the request. Always be sure to provide users with a method for manually specifying such settings on their own (e.g., as part of a “Settings” page).
Given Kraken’s origins at PayPal, a worldwide online payments processor, it should come as no surprise that the framework focuses heavily on security. Kraken does so with the help of Lusca, a library that extends Express with a number of enhanced security techniques, as suggested by the Open Web Application Security Project (OWASP). These extensions are provided in the form of multiple, independently configurable middleware modules. In this section, we will briefly examine two ways in which Kraken can help secure Express against commonly encountered attacks.
This material should by no means be considered exhaustive. It is merely intended to serve as a starting point for implementing security within the context of a Kraken/Express application. Readers with a hand in implementing security on the Web are highly encouraged to delve further into this topic by reading a few of the many great books that are devoted entirely to this subject.

Cookie-based authentication
In a typical scenario, a user will submit their credentials to a web application, which will then compare them with those it has on file. Assuming the credentials are valid, the server will then create a new session—essentially, a record representing the user’s successful sign-in attempt. A unique identifier belonging to this session is then transmitted to the user in the form of a cookie, which is automatically stored by the user’s browser. Subsequent requests to the application made by the browser will automatically attach the information stored in this cookie, allowing the application to look up the matching session record. As a result, the application has the ability to verify the user’s identity without requiring the user to resubmit their username and password along with every request.

Signing into a trusted application

Successful sign-in attempt

Malicious web site attempting to convince the user to click a button

Successful CSRF attack
Several different steps can be taken to defend against attacks of this nature. The method by which Kraken defends against them is referred to as the “synchronizer token pattern.” In this approach, a random string is generated for each incoming request, which the client can subsequently access as part of a template’s context or via a response header. Importantly, this string is not stored as a cookie. The next POST, PUT, PATCH, or DELETE request made by the client must include this string, which the server will then compare with the one it previously generated. The request will only be allowed to proceed if a match is made.

Sign-in page for this chapter’s app project

Kraken’s “CSRF token missing” error
Here we create a hidden input with the name _csrf, the value for which Lusca has automatically passed to our template’s context under a property with the same name. The value that we see rendered in this example, OERRGi9AGNPEYnNWj8skkfL9f0JIWJp3uKK8g=, is a random hash that Lusca has generated for us (i.e., the “synchronizer token”). When we submit this form, Lusca will verify that this value matches the one it previously gave us. If they match, the request is allowed to proceed. Otherwise, an error is thrown. This approach allows applications to defend against CSRF attacks by requiring additional, identifying information that is not stored as part of a cookie, making it much more difficult for attackers to trick users into performing unintended actions.
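The form described here might be sketched as follows in a Dust template. The action, file path, and field names other than _csrf are assumptions; the essential detail is the hidden _csrf input whose value Lusca places on the template context.

```html
{! public/templates/login.dust (sketch) !}
<form method="post" action="/login">
  <input type="hidden" name="_csrf" value="{_csrf}">
  <label>Username <input type="text" name="username"></label>
  <label>Password <input type="password" name="password"></label>
  <button type="submit">Sign in</button>
</form>
```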
Lusca provides developers with a convenient mechanism for configuring an application’s Content Security Policy (CSP ). These rules provide instructions to supporting browsers regarding the locations from which various resources (e.g., scripts, stylesheets, images, etc.) can be loaded. When defined, these rules are conveyed to browsers in the form of the Content-Security-Policy response header.
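A minimal sketch of enabling such a policy with Lusca applied directly to Express follows. A Kraken application would normally enable this through its JSON-based middleware configuration rather than in code, and the directives chosen here are assumptions for illustration.

```javascript
var express = require('express');
var lusca = require('lusca');

var app = express();

app.use(lusca.csp({
  policy: {
    'default-src': "'self'",        // load scripts, styles, etc. only from our own origin
    'img-src': "'self' https:"      // allow images from our origin or from any HTTPS host
  }
}));
```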
For a full list of the various options that can be configured via the Content-Security-Policy header, visit the Open Web Application Security Project (OWASP) at https://owasp.org .
The Node community is heavily influenced by the so-called “Unix philosophy,” which promotes (among other things) the creation of small, tightly focused modules that are designed to do one thing well. This approach has allowed Node to thrive as a development platform by fostering a large ecosystem of open source modules. PayPal has taken this philosophy to heart by structuring Kraken not as a single, monolithic framework, but rather as a collection of modules that extends and provides structure to Express-based applications. By taking this approach, PayPal has managed to contribute several modules to the Node ecosystem from which developers can benefit, regardless of whether they choose to use Kraken as a whole.
Kraken: http://krakenjs.com/
Meddleware: https://github.com/krakenjs/meddleware
Dust.js: www.dustjs.com
SuperAgent: https://github.com/visionmedia/superagent
SuperTest: https://github.com/visionmedia/supertest
Mocha: http://mochajs.org
Open Web Application Security Project (OWASP): https://owasp.org
The human mind gets used to strangeness very quickly if [strangeness] does not exhibit interesting behavior.
—Dan Simmons
MongoDB is a popular cross-platform document database, often lumped into the “NoSQL” classification with other nonrelational data stores such as CouchDB, Cassandra, RavenDB, and so forth. It is a popular choice for data storage among Node.js developers because its “records” are stored as plain JSON objects, and its query interface and stored functions are written in plain JavaScript.
Storing, accessing, and manipulating data in MongoDB are not terribly complex, but Node.js libraries such as Mongoose can help application developers map MongoDB documents onto application objects that have definite schemas, validations, and behavior—all concepts that are not (by design) parts of MongoDB. Mongoose implements the query interface native to MongoDB, but also gives developers a composable, fluent interface that simplifies portions of the query API.
Though MongoDB is not the direct subject of this chapter, it is necessary to establish a few basic concepts about how MongoDB works before delving into Mongoose. If you’re familiar with MongoDB already, feel free to skip the next section.
A relational database server hosts database schemas (sometimes just called databases), which encapsulate related entities like tables, views, stored procedures, functions, and so on. Database tables in turn contain tuples (also known as rows or records). A tuple is composed of a number of fields, each containing a value of a predetermined data type. The tuple is one-dimensional, and its definition (the data types its fields can hold) is determined at the table level. All tuples within a table, then, share the same structure, though their individual field values may differ. The names and data types of a tuple’s fields are referred to as the tuple’s schema.
| RDBMS | MongoDB |
|---|---|
| Server | Server |
| Schema | Database |
| Table | Collection |
| Tuple | Document |
| Field | Property |
| Term | Definition |
|---|---|
| Schema | Defines the data types, constraints, defaults, validations, and so forth for the properties of a document instance; enforced at the application level |
| Model | Constructor function that creates or fetches document instances |
| Document | Instance object created or fetched by a Mongoose model; will have Mongoose-specific properties and methods as well as data properties |
| JSON object | Plain JavaScript object that contains only the data properties from a document |
Unlike RDBMS tuples, MongoDB documents are not one-dimensional. They are complete JSON objects that may contain other objects or arrays. In fact, documents within the same collection need not even have the same properties, because MongoDB collections are actually schemaless. A MongoDB collection can hold document objects of any shape or size (within MongoDB’s storage limits). In practice, though, collections tend to hold documents of similar “shape,” though some may have optional properties, or may contain properties that represent some arbitrary data. But in general, applications usually assume that data exists in particular “shapes,” so although MongoDB does not enforce document schemas, applications often do.
By default, MongoDB documents are automatically assigned a surrogate primary key called _id. This key has a special type (MongoDB’s ObjectId type) and is used as MongoDB’s primary collection index. MongoDB can use a different field as a primary key if directed. Additional fields can be added to secondary indexes within a collection, either as simple or compound keys.
Perhaps orders are never altered. If there is a mistake in an order—for example, the shipping address is wrong—the entire order gets re-created to offset the faulty order. The correct shipping address gets added to the new order.
If a customer changes a postal address, old orders won’t be updated with the new address, so there’s no data integrity issue at stake.
Maybe changing a postal address always happens within the customer domain, never in the order domain.
Perhaps a customer can override a shipping address with a “temporary” address (shipping a gift) that should not be added to the customer record.
If different postal metrics are derived from orders than from customers (e.g., a C-level executive wants to know how many orders were shipped to Missouri last month regardless of who actually lives in Missouri this month), that data is already segregated.
Maybe disk space is cheap and the velocity gained by not enforcing referential integrity outweighs any potential cost.
While foreign keys and referential integrity are critical to RDBMS databases, strong MongoDB document design can often render the issue moot.
Finally, though MongoDB’s query API may look a bit daunting to SQL practitioners, it quickly becomes obvious that, for the most part, looking for data involves the same concepts: selecting (find), filtering (where), applying compound conditions (and, or, in), aggregating (group), paging (skip, limit), and so on. How queries are composed and executed differs mostly in syntax.
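For example, a query that selects, filters, and pages through album data looks conceptually similar in both worlds; only the syntax differs. The collection and field names below are assumptions based on the album data used throughout this chapter.

```javascript
// SQL:
//   SELECT title FROM albums WHERE composer = 'Johann Sebastian Bach' LIMIT 10;

// MongoDB shell equivalent:
db.albums.find({ composer: 'Johann Sebastian Bach' }, { title: 1 }).limit(10);
```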
Mongoose is an object modeling library for Node.js applications. To develop with Mongoose (and follow the examples in this chapter), you need to install Node.js and MongoDB on your platform of choice. The default installation procedure and configuration for both should be sufficient to run this chapter’s example code.
This chapter assumes that you are familiar with Node.js applications and modules and that you know how to install them with npm. A working knowledge of MongoDB will be very helpful, but it is not required to run this chapter's examples, since interaction with MongoDB will mostly occur through Mongoose. Some examples will demonstrate how to query MongoDB directly to verify the results of Mongoose operations. Note, however, that a functional understanding of MongoDB is necessary to get the most out of Mongoose in production environments.
Create a basic Mongoose schema that reflects the structured data in a JSON file.
Read the JSON file and import the data into MongoDB with a Mongoose model.
Run a basic web server that will use a Mongoose model to fetch data from MongoDB and deliver it to a web browser.
The first line of each listing that follows will show the file path in which the example code may be found. Subsequent examples will indicate whether a particular example file should be executed with Node.js in a terminal.
Mongoose is an object data mapper (ODM) , so at the heart of Mongoose data access are model functions that can be used to query the MongoDB collections they represent. A Mongoose model must have a name by which it can be referred and a schema that enforces the shape of the data it will access and manipulate. The code in Listing 9-4 creates an album schema that closely matches the JSON data in example-001/albums.json. Schemas will be covered in detail later, but it should be apparent that a schema defines the properties and their data types for a given Mongoose model. Finally, a model function is created by pairing a name (“Album”) with a schema. This model function is assigned to module.exports in the example-001/album-model.js file so that it can be imported into other modules as needed in a Node.js application.
A Mongoose schema defines the data structure for a model. The model function provides the query interface for working with stored document data. A model must have a name and a schema.
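If you do not have the example code in front of you, the following sketch conveys the general shape of such a module. The property names are assumptions based on the album data described in this chapter, not a verbatim copy of Listing 9-4.

```javascript
// album-model.js (sketch)
var mongoose = require('mongoose');

var albumSchema = new mongoose.Schema({
  title: String,
  composer: String,
  genre: [String],
  releaseDate: Date,
  tracks: [{ title: String, duration: { m: Number, s: Number } }]
});

// pairing a name with a schema produces the model (and, implicitly, the "albums" collection)
module.exports = mongoose.model('Album', albumSchema);
```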
Connect to a running MongoDB server with Mongoose.
Read and parse the contents of the albums.json file.
Use the Album model to create documents in MongoDB (a rough sketch of this flow appears below)
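The connection string, file paths, and database name in the following sketch are assumptions; the chapter's example code remains the definitive version.

```javascript
// import-albums.js (sketch)
var fs = require('fs');
var mongoose = require('mongoose');
var Album = require('./album-model');

mongoose.connect('mongodb://localhost/music');            // assumed database name

var albums = JSON.parse(fs.readFileSync('./albums.json', 'utf8'));

Album.create(albums, function (err, docs) {
  if (err) { return console.error(err); }
  console.log('imported %d albums', docs.length);
  mongoose.disconnect();
});
```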
Because the criteria argument (the first object passed to db.albums.find()) is empty in Listing 9-10, all records are returned. The projection object, however, specifies a single property to be returned by the query: composer. All other properties are excluded except for _id, which is returned by default and will always be included unless the projection parameter specifies otherwise.
Once the album data has been loaded into MongoDB, you can use the same model from Listing 9-4 to query that data.
The rest of this chapter will build on this Mongoose schema, model, and album data stored in the MongoDB database.
Mongoose schemas are simple objects that describe the structure of and data types in a MongoDB document. While MongoDB itself is schemaless, Mongoose enforces schemas for documents at the application level. Schemas are defined by invoking the Mongoose module’s Schema() function, passing it an object hash where the keys represent document properties and the values represent the data type for each property. The return value is an object of type Schema with additional helper properties and functions for expanding or augmenting the schema’s definition.
Mongoose itself provides two special object types: ObjectId and Mixed.
When a document is created in MongoDB, it is assigned an _id property that serves as a unique identifier for the record. This property uses MongoDB’s own ObjectId data type. Mongoose exposes this type via mongoose.Schema.Types.ObjectId. This type is rarely used directly. When querying a document by ID, for example, the string representation of the identifier is typically used.
When a schema property holds arbitrary data (remember, MongoDB is schemaless), it may be declared with the type mongoose.Schema.Types.Mixed. If a property is marked as Mixed, Mongoose will not track changes made against it. When Mongoose persists a document, it creates a query internally that only adds or updates properties that have changed, and since a Mixed property is not tracked, the application must inform Mongoose when it has changed. Documents created by Mongoose models expose a markModified(path) method that will force Mongoose to consider the property identified by the path argument as dirty.
Setting a Mongoose schema property to an empty object literal (one with no properties) will cause Mongoose to treat it as Mixed.
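A brief sketch of a Mixed property and the markModified() call it requires follows; the schema and property names are hypothetical.

```javascript
var mongoose = require('mongoose');

var thingSchema = new mongoose.Schema({
  meta: mongoose.Schema.Types.Mixed          // an empty object literal ({}) would behave the same way
});
var Thing = mongoose.model('Thing', thingSchema);

var thing = new Thing({ meta: { rating: 5 } });
thing.meta.rating = 4;            // changes inside a Mixed property are not tracked...
thing.markModified('meta');       // ...so the application must flag the path as dirty
thing.save(function (err) { /* ... */ });
```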
Finally, because Mongoose is a Node.js library, it takes advantage of Node’s Buffer type to store large blocks of binary data such as image, audio, or video assets. Because binary data can be quite large, many applications store URL references to binary assets located on a content delivery network such as Amazon’s Simple Storage Service (S3) instead of storing binaries in a data store such as MongoDB. Use cases differ across applications, however, and Mongoose schemas are flexible enough to support either approach.
Adding sensible default values to schema properties instructs Mongoose to fill in missing data when a document is created. This is useful for document properties that aren’t optional but typically hold some known value.
Adding a default to a property requires that the type assignment look a bit different. Notice that m: Number has become m: {type: Number, default: 0}. Normally, assigning an object hash to a property would cause the property to have a Mixed or object type, but the presence of the type property in the object literal short-circuits that process and tells Mongoose that the other key/value pairs in the hash are property settings.
Mongoose documents automatically acquire an indexed _id property when saved to MongoDB. Secondary indexes can be added to a schema, however, to enhance performance when querying against other fields.
Track title (simple)
Album composer (simple)
Album title (simple)
Album title + album composer (compound)
Album genre (simple)
Simple indexes are added at the property level by appending an index field to a property type declaration and setting it to true. Compound indexes, on the other hand, must be defined for the schema as a whole using the Schema.index() method . The object passed to index() contains property names that correspond to the schema properties to be indexed and a numeric value that may be either 1 or -1.
MongoDB sorts indexes in either ascending or descending order. Compound indexes are defined with a numeric value instead of a boolean value (like simple indexes) to indicate the order in which each field should be indexed. For simple indexes, the order doesn’t matter because MongoDB can search either way. But for compound indexes, the order is very important because it limits the kind of sort operations MongoDB can perform when a query uses a compound index. The MongoDB documentation covers compound indexing strategies in depth.
In Listing 9-20 a compound index for composer and title is added to the album schema in addition to simple indexes for both fields. It is entirely likely that a user will search for an album by composer, title, or both.
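In code, those index declarations might look like the following sketch: simple indexes declared inline on the properties, and the compound index declared on the schema as a whole.

```javascript
var mongoose = require('mongoose');

var albumSchema = new mongoose.Schema({
  title:    { type: String, index: true },             // simple index
  composer: { type: String, index: true },             // simple index
  genre:    { type: [String], index: true },           // simple index
  tracks:   [{ title: { type: String, index: true } }] // simple index on each track title
});

// compound index: composer first, then title, both ascending
albumSchema.index({ composer: 1, title: 1 });
```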
The schema’s path() method returns an instance of SchemaType, an object that encapsulates the definition of a schema’s property—in this case, the tracks property, which is an array of track objects for the album. The SchemaType.validate() method attaches a validation function to the schema’s property. The first argument is the actual validation function, which receives, as its only argument, the value to be validated. The second argument to validate() is the message that will be used if a validation error is raised.
When an album document is saved, this function will be executed as part of the Mongoose validation process, evaluating the tracks property to ensure that the album has at least one track.
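Continuing with the album schema from the earlier sketches, such a validator might look like this (the message text is an assumption):

```javascript
albumSchema.path('tracks').validate(function (tracks) {
  return Array.isArray(tracks) && tracks.length > 0;
}, 'An album must contain at least one track.');
```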
Validation occurs whenever documents are persisted; that is, whenever Model.create() is called, or the save() method is called on a document instance. If validation fails, an error is passed as the first argument to a callback for each of these methods. (Documents will be discussed in detail later.)
Though MongoDB is a relationless data store, relationships between documents in collections can be created through informal references that act as foreign keys. The integrity enforcement and resolution of these foreign keys to objects is left entirely to the application, of course. Mongoose builds these informal relationships through population references—links between schemas that enable automatic eager loading (and manual lazy loading) of document graphs. To expand on the music application example, it is very likely that users will create their own personal album libraries. Because album documents can be large, it might be best to avoid duplicating album data in each library document. Instead, references will be created from library documents to individual albums, a kind of many-to-many relationship. When libraries are loaded by Mongoose, these references can be resolved so that full library object graphs are returned populated with album documents.
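A sketch of such a library schema and an eagerly populated query follows. The schema properties and query criteria are assumptions; the ObjectId reference and the populate() call are the essential ingredients.

```javascript
var mongoose = require('mongoose');

var librarySchema = new mongoose.Schema({
  owner:  String,
  albums: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Album' }]   // references, not copies
});
var Library = mongoose.model('Library', librarySchema);

// eager loading: resolve the references when the library is fetched
Library.findOne({ owner: 'nicholas' })
  .populate('albums')
  .exec(function (err, library) {
    if (err) { return console.error(err); }
    console.log(library.albums[0].title);    // full album documents, not just ObjectIds
  });
```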
Each step in the import flow is annotated in Listing 9-28, but several steps involve concepts that have not yet been introduced.
Mongoose documents may all be cast to their ObjectIds. Mongoose is smart enough to perform this cast automatically, so adding album documents to the albums property will pass the schema check. Alternatively, the import script could pluck the _id property from each album document and place it into the albums array instead. The result would be identical.
Mongoose raises events on a schema object whenever particular MongoDB documents are validated, saved, or removed from a document collection. Events are raised before and after each one of these operations. Subscriptions to these events are assigned with a schema’s pre() and post() methods, respectively. A subscription is simply a function or middleware that receives arguments related to each event. Post-event middleware simply observes the document after the event is complete, but pre-event middleware may actually interrupt the document life cycle before an event is completely processed.
Pre-event middleware can execute in a synchronous or asynchronous manner. The code in Listing 9-34 is synchronous, which means that other middleware functions will be scheduled only after the duration summation has been completed. To change this behavior and schedule them all immediately, one after the next, the schema’s pre() method is called with an additional boolean argument that flags the handler function as asynchronous middleware.
Schedule the duration summation process for the next event loop pass.
Invoke next() to pass control to the next piece of middleware.
At some future point in time, signal that this middleware operation is complete by invoking done().
If an error is raised in a synchronous, pre-event middleware function, it should be passed as the only argument to next(). Errors raised during asynchronous functions, however, should be passed to done() instead. Any error passed to these callbacks will cause the operation that triggered the event to fail and will be delivered to the final operation callback (e.g., the callback passed to a document’s save() method).
Post-event middleware functions receive no control flow arguments, but instead receive a copy of the document as it stands after the event’s operation has completed.
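The following sketch, again using the album schema, shows all three flavors side by side: synchronous pre-save middleware, asynchronous pre-save middleware flagged with the boolean argument, and post-save middleware. The work performed inside each handler is hypothetical.

```javascript
// synchronous pre-save middleware: runs to completion before the next handler is scheduled
albumSchema.pre('save', function (next) {
  this.trackCount = this.tracks.length;     // recompute a derived value (hypothetical field)
  next();                                   // pass an Error here to abort the save
});

// asynchronous pre-save middleware: the `true` flag adds the done() callback
albumSchema.pre('save', true, function (next, done) {
  next();                                   // allow the next middleware to be scheduled immediately
  setTimeout(function () {
    done();                                 // signal completion later; pass an Error to fail the save
  }, 0);
});

// post-save middleware simply observes the saved document
albumSchema.post('save', function (doc) {
  console.log('album "%s" saved', doc.title);
});
```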
A Mongoose model is a constructor function that creates document instances. These instances conform to a Mongoose schema and expose a collection of methods for document persistence. Models are associated with MongoDB collections. In fact, when a Mongoose document is saved, the collection to which it corresponds will be created if it does not already exist. By convention, models are named in the singular form of the noun they represent (e.g., Album), but collections are named in the plural form (e.g., albums).
Documents are more than just data: they may also include custom behavior. When document instances are created, Mongoose creates a prototype chain with copies of functions defined on the schema object’s methods property. Document methods defined in this way may access particular document instances with the this keyword.
Like instance methods, virtual getter and setter properties can be added to documents via the schema. These virtual properties act like normal data properties but are not persisted when the document is saved. They are useful for computing and returning values based on document data or for parsing data that contains, or can be converted to, values for other document properties.
The string argument passed to the Schema.virtual() method defines the document path where the property will reside once a document instance is created. Document virtuals may be assigned to subdocuments and nested objects as well by specifying the full path starting at the root document. For example, if the value of the composer property was an object with firstName and lastName properties, the virtual might live at composer.inverse instead.
When the album model is later created from the schema, any method on statics will be bound to the model. While the value of this in instance methods is the document itself, the value of the this keyword in static methods is the model constructor function (e.g., Album). Any function that can be called on the model, such as find() and create(), may be accessed in a static method.
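The three extension points described in the preceding paragraphs (instance methods, virtual properties, and static model methods) can be sketched together as follows, again using the album schema. The method names and bodies are assumptions chosen only to show where this points in each case.

```javascript
// instance method: `this` is the document
albumSchema.methods.trackTitles = function () {
  return this.tracks.map(function (track) { return track.title; });
};

// virtual property: computed on demand, never persisted
albumSchema.virtual('display').get(function () {
  return this.composer + ': ' + this.title;
});

// static method: `this` is the model (e.g., Album)
albumSchema.statics.findByComposer = function (composer, callback) {
  return this.find({ composer: composer }, callback);
};
```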
The query examples in the next section do not use static model methods for encapsulation. This is done to simplify each example, though in a real maintainable application, it might be considered bad practice.
Mongoose queries are plain objects composed of zero or more properties that specify the parameters of the query. (An empty query object matches everything.) Properties on these criteria objects share MongoDB’s native query syntax. Models expose several different query methods that use criteria objects in order to filter and return Mongoose documents.
The Album.find() method will return a Mongoose Query object that exposes additional methods for manipulating the results of the find operation.
Model methods can be invoked in several ways. The first, shown in Listing 9-48, returns a Query object with a fluent interface that allows query options to be chained together until the Query.exec() method is called. The second method avoids the Query object altogether. If a callback is passed as the last argument to a model’s query method (e.g., find({}, function () {...})), the underlying query will be executed immediately and the error or result passed to the callback. For simple queries, the second method is more terse.
The first Query directive is Query.sort(), which accepts an object that uses MongoDB’s sorting notation. The properties in this object tell MongoDB which properties in the document should be used for sorts and in which direction each sort should be ordered (1 for ascending, -1 for descending). When the results in Listing 9-48 are fetched, they will be ordered first by composer, then by album title.
After Query.sort(), the Query.lean() method is invoked to instruct Mongoose to deliver plain JSON objects instead of Mongoose documents as results. By default, Mongoose will always fetch documents, which carry Mongoose-specific properties and methods for validating, persisting, and otherwise managing document objects. Since this route (and most routes in this file) simply serializes results and returns them to the client, it is preferable to fetch them as Plain Old JavaScript Objects (or JSON objects) populated only with data.
Once a query has been prepared, its exec() method is passed a callback to receive either an error or data from the Album.find() operation. The results will be an array of album objects that match whatever criteria (if any) was used to perform the query.
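Putting those pieces together, a fluent query and its terse, callback-only counterpart might look like the sketch below. The criteria are assumptions, and the Album model is the one defined earlier (require path assumed).

```javascript
var Album = require('./album-model');

Album.find({})
  .sort({ composer: 1, title: 1 })          // ascending by composer, then by title
  .lean()                                   // plain JSON objects instead of Mongoose documents
  .exec(function (err, albums) {
    if (err) { return console.error(err); }
    console.log('%d albums found', albums.length);
  });

// terse form: passing a callback executes the query immediately, skipping the Query object
Album.find({ composer: 'Johann Sebastian Bach' }, function (err, albums) { /* ... */ });
```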
Several curl commands are shown in Listing 9-49 with various query string parameters. In each case the output is a serialized JSON array delivered from the web API.
The following examples use MongoDB identifiers that were generated on my computer. These identifiers will differ on your computer. You may use the mongo terminal client to discover the identifiers assigned to your MongoDB documents, as demonstrated in previous examples.
If validation fails, or if the album otherwise cannot be created, an error will be passed to the final callback and delivered to the client as an HTTP 500 Internal Server Error. If the album document is created, the data is passed back to the client as serialized JSON. Unlike previous routes where Query.lean() was used to ensure that only data is serialized, the album document returns its own data in JSON format when its toObject() method is called. This is the manual equivalent of the process that lean() performs in a query chain.
The album data in example-010/new-album.json lacks a releaseDate property, a condition that did not cause the schema validation to fail on import because releaseDate is not required. Indeed, releaseDate defaults to Date.now and, if queried with the mongo client, will be exactly that. Unfortunately, the album was not, in fact, released today, so it is necessary to create another route to update the newly minted album document.
Like Listing 9-51, a serialized JSON object is sent in the body of an HTTP request. This request is a PUT request, however, and includes the album identifier in the URL. The only data sent in the request body are the properties to be updated. It is unnecessary to send the full document across the wire because Mongoose will apply the deltas appropriately. Once the request body is deserialized, the album ID and updated fields are passed to findByIdAndUpdate(). If the update operation succeeds, the updated document will be passed to the final query callback, assuming no errors occur.
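Inside an Express route handler, that update might be sketched as follows, assuming an Express app and the Album model from earlier. The route shape is an assumption; the {new: true} option asks Mongoose to return the updated document rather than the original.

```javascript
app.put('/albums/:id', function (req, res) {
  // req.body contains only the properties to be updated (e.g., releaseDate)
  Album.findByIdAndUpdate(req.params.id, req.body, { new: true }, function (err, album) {
    if (err) { return res.status(500).json({ error: err.message }); }
    res.json(album.toObject());
  });
});
```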

Library population results
At this point the album and library routes consist of basic CRUD operations (create, read, update, and delete) that form the basis of many web APIs, but more could be done to make the API robust. MongoDB supports a number of helpful query operators that serve to filter data in specific ways.
To find albums that were released before, and up to, a specific date, the $lte (“less than or equal”) operator could be used. Likewise, the $gte operator would find albums released from a specific date onward. To find all albums that were released on any date but the date provided, the $ne (“not equal”) operator would filter accordingly. Its inverse, $eq, if used alone is functionally equivalent to setting the releaseDate value on the criteria object directly.
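A few of those operators in criteria form (the dates are arbitrary):

```javascript
// released on or before January 1, 1990
Album.find({ releaseDate: { $lte: new Date('1990-01-01') } }, function (err, albums) { /* ... */ });

// released on or after January 1, 1990
Album.find({ releaseDate: { $gte: new Date('1990-01-01') } }, function (err, albums) { /* ... */ });

// released on any date except the one provided
Album.find({ releaseDate: { $ne: new Date('1990-01-01') } }, function (err, albums) { /* ... */ });
```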
To keep the response small, the Query.select() method is invoked before the query is executed. This method limits the properties returned from each result object. In this case, the query selects only the composer, title, and releaseDate properties, all included in a space-separated string. All other properties are ignored.
The _id property is the only property that may be specified for exclusion when an inclusive select (one that specifies the properties to be fetched) is performed. Otherwise, excluded and included properties may not be mixed. A query is either selecting only specific properties or excluding only specific properties, but not both. If any property in a Query.select() string is negated (except for _id), all specified properties must be negated or an error will be raised.
The $nin operator does the exact opposite: it will match only if the property value is not included in the specified set.
To determine what constitutes a “related” genre, the criteria object selects albums that have the principal genre as an element in each document’s genre array. It then compiles a list of all other genres that have been assigned to albums in the result set and returns that list to the client. Though Album.genre is an array, MongoDB knows to traverse it for values that match the elements in the $in operator. The Query.select() method excludes the _id property and includes only the genre property, since it alone contains the data in which this route is interested.
Fortunately, the $and and $or operators can be used to construct a criteria object that will produce the desired set of albums. Both operators accept an array of criteria objects that may contain simple queries or complex queries that also contain $and, $or, or any other valid query operators. The $and operator performs a logical AND operation using each criteria object in its array, selecting only documents that match all specified criteria. In contrast, the $or operator performs a logical OR operation, selecting documents that match any of its criteria.
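Combining these ideas, a criteria object that matches albums by a particular composer or within a set of genres, while selecting only a few properties, might be sketched like so (the values are assumptions):

```javascript
Album.find({
  $or: [
    { composer: 'Philip Glass' },
    { genre: { $in: ['Minimalism', 'Contemporary'] } }   // matches if any array element is in the set
  ]
})
  .select('composer title releaseDate')   // inclusive selection; _id is still returned by default
  .exec(function (err, albums) { /* ... */ });
```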
| Operator | Description |
|---|---|
| $not, $nor | Negative logical operators that combine query clauses and select documents that match accordingly |
| $exists | Selects documents where the specified property exists (remember, MongoDB documents are technically schemaless) |
| $type | Selects documents where the specified property is of a given type |
| $mod | Selects documents where a modulo operator on a specified field returns a specified result (e.g., select all albums where the price is divisible evenly by 3.00) |
| $all | Selects documents with an array property that contains all specified elements |
| $size | Selects documents with an array property of a given size |
| $elemMatch | Selects documents where a subdocument in an array matches more than one condition |
MongoDB is schemaless and extremely flexible by design, but application developers often add constraints on data in application code to enforce business rules, ensure data integrity, conform to existing application abstractions, or achieve any number of other goals. Mongoose recognizes and embraces this reality, and rests snugly between application code and the data store.
Mongoose schemas add constraints to otherwise free-form data. They define the shape and validity of the data to be stored, enforce constraints, create relationships between documents, and expose the document life cycle via middleware.
Models provide a full but extensible query interface. Criteria objects that conform to MongoDB query syntax are used to find specific data. Chainable query methods give developers control over the property selection, reference population, and whether full documents or plain JSON objects are retrieved. Custom static methods that encapsulate complicated criteria objects and more involved queries can be added to models to keep application concerns properly segregated.
Finally, Mongoose documents can be extended with custom instance methods that contain domain logic and custom getters and setters that aid in computed property manipulation.
The report of my death was an exaggeration.
—Samuel Langhorne Clemens (Mark Twain)
In this chapter, we will explore two libraries that work together to ease many of the difficulties that Node.js developers often encounter when working with relational databases. The first, Knex, provides a flexible and consistent interface for interacting with several well-known SQL platforms such as MySQL and PostgreSQL. The second, Bookshelf, builds on this foundation by providing developers with a powerful object-relational mapping (ORM) library that simplifies the process of modeling the entities that comprise an application’s data structure, along with the various relationships that exist between them. Readers who are familiar with Backbone.js and its emphasis on structuring data within Models and Collections will quickly find themselves at home with Bookshelf, as the library follows many of the same patterns and provides many of the same APIs.
Create SQL queries with the Knex query builder
Create complex database interactions without resorting to nested callback functions, with the help of promises
Ensure the integrity of your application’s data through the use of transactions
Manage changes to your database’s schema with the help of Knex migration scripts
Bootstrap your database with sample data using Knex seed scripts
Define one-to-one, one-to-many, and many-to-many relationships between Bookshelf models
Use eager loading to efficiently retrieve complex object graphs based on Bookshelf relationships
Most of the examples in this chapter make heavy use of the promise-based and Underscore-inspired APIs that both Bookshelf and Knex provide.
A promise-based interface that allows for cleaner control of asynchronous processes
A stream interface for efficiently piping data through an application as needed
Unified interfaces through which queries and schemas for each supported platform can be created
Transaction support
Create, implement, and (when necessary) revert database migrations, scripted schema changes that can then be committed with an application’s source code
Create database “seed” scripts, a consistent method by which an application’s database can be populated with sample data for local development and testing
Each of these subjects will be covered in more detail throughout this chapter.
SQLite implements a self-contained, serverless database within a single file on your disk and requires no additional tools. If you don’t have access to a database server such as MySQL at the moment, the sqlite3 library will provide you with a quick and easy way to begin experimenting with Knex without requiring additional setup. The examples referenced throughout this chapter will use this library.
As you can see, the configuration settings required for SQLite3 are quite a bit simpler than those required for other, more full-featured solutions. Instead of providing connection settings, we simply provide the name of a file (db.sqlite) in which SQLite will store its data.
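A minimal sketch of that configuration (the file name matches the one mentioned above; everything else is left to Knex's defaults):

```javascript
var knex = require('knex')({
  client: 'sqlite3',
  connection: {
    filename: './db.sqlite'     // SQLite stores the entire database in this single file
  }
});
```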
The primary focus of Knex is on providing developers with a unified interface through which they can interact with multiple, SQL-based databases without having to worry about minor variations in syntax and response format that exist between each of them. To that end, Knex provides a number of methods, most of which fall into one of two categories: query builder methods and interface methods.
While the example shown in Listing 10-5 demonstrates the basic method by which SQL queries can be created with Knex, it does little to convey the true value of the library. That value should start to become more apparent as we take a look at the various interface methods that Knex provides. It is with these methods that we can begin to submit our queries and process their resulting data.
Knex provides a number of interface methods that allow us to submit and process our queries in several convenient ways. In this section, we’ll take a look at two of the most useful approaches that are available to us.
Callback functions allow us to defer the execution of a particular sequence of code until the appropriate time. Such functions are easy to understand and implement. Unfortunately, they are also very difficult to manage as applications grow in complexity. Imagine a scenario in which additional asynchronous processes must run after the initial response is received in Listing 10-6. To do so would require the use of additional, nested callback functions. As additional asynchronous steps are added to this code, we begin to experience what many developers refer to as “callback hell” or the “pyramid of doom,” terms that describe the unmaintainable mass of spaghetti code that frequently results from such an approach.
Cities within a particular state are selected.
Users who live within the returned cities are selected.
Bookmarks for each of the returned users are selected.
Thanks to the promise-based interface provided by Knex, at no point does our code ever reach beyond one level of indentation, thereby ensuring that our application remains easy to follow. More importantly, should an error occur at any point during this process, it would be conveniently caught and handled by our final catch statement.
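The shape of that promise chain, assuming the knex instance configured earlier and with hypothetical table and column names standing in for the chapter's example code, looks roughly like this:

```javascript
knex('cities').select('id').where({ state: 'MO' })
  .then(function (cities) {
    var cityIds = cities.map(function (city) { return city.id; });
    return knex('users').select('id').whereIn('city_id', cityIds);
  })
  .then(function (users) {
    var userIds = users.map(function (user) { return user.id; });
    return knex('bookmarks').whereIn('user_id', userIds);
  })
  .then(function (bookmarks) {
    console.log('%d bookmarks found', bookmarks.length);
  })
  .catch(function (err) {
    console.error(err);          // a failure anywhere in the chain lands here
  });
```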
JavaScript promises are a powerful tool for writing complex, asynchronous code in a manner that is easy to follow and maintain.
One of the biggest benefits of writing applications with Node.js is the platform's ability to execute I/O-intensive procedures very efficiently. Unlike traditionally synchronous platforms such as PHP, Python, or Ruby, Node.js is capable of handling thousands of simultaneous connections within a single thread, allowing developers to write applications capable of meeting enormous demands while using minimal resources. Node.js provides several important tools for accomplishing this feat, one of the most important of which is streams.
In this example, we use the readFile() method of the native fs library available within Node.js to read the contents of a file. Once that data is loaded into memory (in its entirety), it is then passed to our callback function for further processing. This approach is simple and easily understood. However, it’s not very efficient, as our application must first load the entire contents of the file into memory before passing it back to us. This isn’t a terrible problem for smaller files, but larger files may begin to cause issues, depending on the resources available to the server that happens to be running this application.
In this example, we combine the power of the streaming and promise-based interfaces provided by Knex. When a callback function is passed to the library's stream() method, that callback receives the generated stream rather than having the stream returned directly. A promise is returned instead, which is resolved once the stream is complete.
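A sketch of that combination is shown below, assuming the knex instance from earlier is connected to one of the supported platforms mentioned in the following note (the users table is hypothetical). Each row is piped into a small object-mode writable stream as it arrives, rather than being buffered in memory all at once.

```javascript
var stream = require('stream');

var logRow = new stream.Writable({
  objectMode: true,
  write: function (row, encoding, next) {
    console.log(row);            // each row arrives as a plain object
    next();
  }
});

knex.select('*').from('users')
  .stream(function (rowStream) {
    rowStream.pipe(logRow);      // the callback receives the stream itself
  })
  .then(function () {
    console.log('stream complete');
  })
  .catch(function (err) {
    console.error(err);
  });
```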
The streaming interface provided by Knex is compatible with MySQL, PostgreSQL, and MariaDB databases.
One of the biggest benefits to using ACID-compliant, relational databases lies in their ability to group multiple queries into a single unit of work (i.e., a “transaction”) that will either succeed or fail as a whole. In other words, should a single query within the transaction fail, any changes that may have occurred as a result of previously run queries within the transaction would be reverted.
By way of an example , consider a financial transaction that occurs at your bank. Suppose you wanted to send $25 to your cousin on her birthday. Those funds would first have to be withdrawn from your account and then inserted into your cousin’s account. Imagine a scenario in which the application enabling that exchange of funds were to crash for any number of reasons (e.g., a faulty line of code or a larger system failure) after those funds were removed from your account, but before they were inserted into your cousin’s account. Without the safety net provided by transactions, those funds would have essentially vanished into thin air. Transactions allow developers to ensure that such processes only ever happen in full—never leaving data in an inconsistent state.
The acronym ACID (Atomicity, Consistency, Isolation, Durability) refers to a set of properties that describe database transactions. Atomicity refers to the fact that such transactions can either succeed in their entirety or fail as a whole. Such transactions are said to be “atomic”.
Previous examples within this chapter have demonstrated the process of creating and submitting database queries with Knex. Before we continue, let’s review another example that does not take advantage of transactions. Afterward, we’ll update this example to take advantage of the peace of mind that transactions provide.
The total funds currently available within the source account are determined.
If insufficient funds are available to complete the process, an error is thrown.
The funds to be transferred are deducted from the source account.
The total funds currently available within the destination account are determined.
If the destination account cannot be found, an error is thrown.
The funds to be transferred are added to the destination account.
If you haven’t spotted the mistake already, a glaring problem presents itself at step 5. In the event that the destination account cannot be found, an error is thrown, but at this point the funds to be moved have already been deducted from the source account! We could attempt to solve this problem in a number of ways. We could catch the error within our code and then credit the funds back to the source account, but this would still not account for unforeseen errors that could arise due to network problems or in the event that our application server were to lose power and completely crash in the middle of this process.
As you can see, the transaction-aware example shown in Listing 10-14 largely resembles that shown in Listing 10-13, but it does differ in one important way. Instead of creating our query by calling builder methods directly on the knex object, we first initiate a transaction by calling knex.transaction() . The callback function that we provide is then passed a “transaction-aware” stand-in (trx) from which we then begin to create our series of queries. From this point forward, any queries that we create from the trx object will either succeed or fail as a whole. The knex.transaction() method returns a promise that will be resolved or rejected once the transaction as a whole is complete, allowing us to easily integrate this transaction into an even larger series of promise-based actions.
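Reduced to its essentials, the pattern looks like the following sketch. The table, columns, amounts, and account identifiers are assumptions; the key points are that both queries are built from trx, and that returning a promise from the callback lets Knex commit or roll back automatically.

```javascript
var sourceId = 1, destinationId = 2;          // hypothetical account ids

knex.transaction(function (trx) {
  return trx('accounts').where({ id: sourceId }).decrement('balance', 25)
    .then(function () {
      return trx('accounts').where({ id: destinationId }).increment('balance', 25);
    });
})
  .then(function () {
    console.log('transfer complete');
  })
  .catch(function (err) {
    console.error('transfer failed and was rolled back:', err);
  });
```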
Just as an application’s source code is destined to change over time, so too is the structure of the information that it stores. As such changes are made, it is important that they be implemented in a way that can be repeated, shared, rolled back when necessary, and tracked over time. Database migration scripts provide developers with a convenient pattern for accomplishing this goal.
After running this command, a file (knexfile.js) will be created with contents similar to those shown in Listing 10-16. You should alter the contents of this file as needed. Whenever a Knex migration script is run, Knex will determine its connection settings based on the contents of this file and the value of the NODE_ENV environment variable.
On OS X and Linux, environment variables are set from the terminal by running export ENVIRONMENT_VARIABLE=value. The command to be used within the Windows command line is set ENVIRONMENT_VARIABLE=value.
Knex migration scripts are stored in a migrations folder at the root level of a project. If this directory does not exist, Knex will create it for you. Knex automatically prepends a timestamp to the file name of migration scripts, as shown in Listing 10-18. This ensures that a project’s migrations are always sorted by the order in which they were created.
It is now up to us to modify the up and down functions within our newly created migration script. Let’s take a look at two alternative approaches.
Schema builder methods are useful, in that they allow developers to easily define schemas in a way that can be applied to each of the platforms supported by Knex. They also require a minimal amount of knowledge regarding raw SQL queries, making it possible for developers with little experience working directly with SQL databases to get up and running quickly. That said, schema builder methods are also limiting. To provide a generic interface for defining database schemas that work across multiple platforms, Knex must make certain decisions for you—a fact that you may not be comfortable with. Developers with more experience working directly with SQL databases may wish to bypass the schema builder methods entirely, opting instead to craft their own SQL queries. This is easily accomplished, as we are about to see.
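Before moving on to the raw SQL alternative, here is a sketch of what the schema-builder version of a migration's up and down functions might look like; the table and columns are assumptions.

```javascript
// migrations/20160101120000_create_users.js (the timestamp prefix is generated by Knex)
exports.up = function (knex, Promise) {
  return knex.schema.createTable('users', function (table) {
    table.increments('id').primary();
    table.string('name');
    table.string('state');
    table.timestamps();
  });
};

exports.down = function (knex, Promise) {
  return knex.schema.dropTable('users');
};
```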
The example shown in Listing 10-20 makes use of an additional library that is unrelated to Knex: multiline. The multiline library is quite useful because it allows us to define large chunks of text that span multiple lines without requiring that each line end with a continuation character.

The knex_migrations table used by Knex to track which migration scripts have already been applied to your database
By default, Knex saves newly created seed scripts to the seeds folder at the root path of your project. You can customize this folder by modifying the contents of your project’s knexfile.js configuration file (see Listing 10-16).
Seed scripts are always run in alphabetical order. If the order in which your seeds are run is important, take care to name them appropriately to ensure they run in the desired order.
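A seed script is simply a module that exports a seed function; a sketch with hypothetical data follows. Prefixing file names with numbers, as mentioned above, is an easy way to control the alphabetical run order.

```javascript
// seeds/00-users.js
exports.seed = function (knex, Promise) {
  return knex('users').del()                    // start from an empty table
    .then(function () {
      return knex('users').insert([
        { name: 'Ada',   state: 'MO' },
        { name: 'Grace', state: 'NY' }
      ]);
    });
};
```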
Create classes (“models”) to represent the various tables used within an application’s database
Extend models with custom behavior unique to the needs of their application
Define complex relationships between models (one-to-one, one-to-many, many-to-many)
Easily navigate through the various relationships that exist between models without resorting to complex SQL queries, with the help of “eager loading”
Developers who are familiar with Backbone will quickly find themselves at home with Bookshelf, as it follows many of the same patterns and implements many of the same APIs. You could easily describe Bookshelf as “Backbone for the server,” and you wouldn’t be far off base.

Here, the relationship between users and accounts (an account has one or more users, users belong to accounts) is described via the account_id foreign key column within the users table
This approach to storing information is powerful and serves as the predominant method by which applications store data, for many good reasons (all of which extend well beyond the scope of this book). Unfortunately, this approach is also at odds with the object-oriented approach with which most applications tend to view data.
Object-relational mapping (ORM) tools such as Bookshelf allow developers to interact with the flat tables of information stored within relational databases as a series of interconnected objects, with which they can interact and navigate through to achieve some desired goal. In effect, ORM libraries provide developers with a “virtual object database” that allows them to more easily interact with the flat records contained within relational database tables.
A Bookshelf model can be thought of as a class that, when instantiated, represents a record within a database. In their simplest form, Bookshelf models serve as data containers, providing built-in functionality for getting and setting attribute (i.e., column) values and for creating, updating, and destroying records. As we’ll soon see, however, Bookshelf models become much more useful when we extend them with our own custom methods and define the relationships that exist between them.
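In its most basic form, then, a Bookshelf model can be defined and used as in the following sketch, which assumes the SQLite Knex configuration shown earlier and a hypothetical users table.

```javascript
var knex = require('knex')({ client: 'sqlite3', connection: { filename: './db.sqlite' } });
var bookshelf = require('bookshelf')(knex);

var User = bookshelf.Model.extend({
  tableName: 'users'
});

// fetch a record, read an attribute, update it, and persist the change
new User({ id: 1 }).fetch().then(function (user) {
  console.log(user.get('name'));
  return user.save({ name: 'Ada Lovelace' });
});
```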
In their simplest state, Bookshelf models do little more than serve as containers for records within a database, providing built-in methods for reading and writing attribute values and performing save or destroy operations. While this is useful, Bookshelf models begin to reach their full potential only when we begin to extend them with their own unique behavior as befitting the needs of our application.
The invokeThen() method demonstrated in this example returns a promise of its own, which will be resolved only after all the calls to sendEmail() on our collection’s models have themselves been resolved. This pattern also provides us with a convenient method for interacting with multiple models simultaneously.
Within this example’s overridden toJSON() method, we first call the prototype’s toJSON() method, giving us the data that this method would have originally returned, had it not been overwritten. We then strip out the data we wish to hide, add some additional information of our own, and return it.
A common scenario in which this pattern is often seen involves the use of a User model, within which sensitive password information is held. Modifying the model’s toJSON() method to automatically strip out such information, as shown in Listing 10-34, helps to prevent this information from unintentionally leaking out over an API request.
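Continuing with the bookshelf instance from the previous sketch, such an override might look like this (the attribute names are assumptions):

```javascript
var User = bookshelf.Model.extend({
  tableName: 'users',
  toJSON: function () {
    // start from the data the prototype's implementation would have returned...
    var attrs = bookshelf.Model.prototype.toJSON.apply(this, arguments);
    delete attrs.password;        // ...strip anything sensitive...
    attrs.type = 'user';          // ...and add anything extra the client needs
    return attrs;
  }
});
```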
An object of instance properties to be inherited by created instances of the model
An object of class properties to be assigned directly to the model
Class-level properties provide a convenient location in which we can define various helper methods related to the model in question. In this contrived example, the getRecent() method returns a promise that resolves to a collection containing every user who has signed in within the last 24 hours.
Having the ability to create models that extend across multiple levels of inheritance provides some useful opportunities. Most of the applications in which we use Bookshelf follow the lead shown in Listing 10-36, in which a Base model is created from which all other models within the application extend. By following this pattern, we can easily add core functionality to all models within our application simply by modifying our Base class. In Listing 10-36, the User model (along with every other model that extends from Base) will inherit the Base model’s foo() method.
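A compact sketch of both ideas, the two-argument form of extend() and a shared Base model, is shown below. The column name and helper bodies are assumptions.

```javascript
var Base = bookshelf.Model.extend({
  // instance properties: inherited by every created instance
  foo: function () {
    return 'foo';
  }
}, {
  // class properties: assigned directly to the model constructor
  getRecent: function () {
    var yesterday = new Date(Date.now() - 24 * 60 * 60 * 1000);
    return this.collection()
      .query('where', 'last_sign_in', '>', yesterday)   // hypothetical column
      .fetch();
  }
});

var User = Base.extend({ tableName: 'users' });
// new User().foo() returns 'foo'; User.getRecent() is available on the model itself
```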
ORM libraries such as Bookshelf provide convenient, object-oriented patterns for interacting with data stored in flat, relational database tables. With Bookshelf’s help, we can specify the relationships that exist between our application’s models. For example, an account may have many users, or a user may have many bookmarks. Once these relationships have been defined, Bookshelf models open up new methods that allow us to more easily navigate through these relationships.
| Association | Relationship Type | Example |
|---|---|---|
| One-to-one | hasOne | A User has a Profile |
| One-to-one | belongsTo | A Profile belongs to a User |
| One-to-many | hasMany | An Account has many Users |
| One-to-many | belongsTo | A User belongs to an Account |
| Many-to-many | belongsToMany | A Book has one or more Authors |
In the following sections, you will discover the differences between these relationships, how they are defined, and how they can best be put to use within an application.
A one-to-one association is the simplest form available. As its name suggests, a one-to-one association specifies that a given model is associated with exactly one other model. That association can take the form of a hasOne relationship or a belongsTo relationship, based on the direction in which the association is traversed.

The database schema behind our one-to-one relationships
In Listing 10-38, an instance of the User model is retrieved. When fetched, the default behavior of a Bookshelf model is to retrieve only information about itself, not about its related models. As a result, in this example we must first load the model’s related Profile via the load() method, which returns a promise that is resolved once the related model has been fetched. Afterward, we can reference this user’s profile via the user’s related instance method.
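A sketch of both sides of that relationship, along with the lazy load() call just described, follows. It assumes a profiles table whose user_id column points back at users, plus a hypothetical bio column.

```javascript
var User = bookshelf.Model.extend({
  tableName: 'users',
  profile: function () {
    return this.hasOne(Profile);        // profiles.user_id is the foreign key
  }
});

var Profile = bookshelf.Model.extend({
  tableName: 'profiles',
  user: function () {
    return this.belongsTo(User);
  }
});

new User({ id: 1 }).fetch()
  .then(function (user) {
    return user.load('profile');        // lazy-load the related model
  })
  .then(function (user) {
    console.log(user.related('profile').get('bio'));
  });
```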
The one-to-many association forms the basis for the most commonly encountered relationships. This association builds on the simple one-to-one association we just saw, allowing us to instead associate one model with many other models. These relationships can take the form of a hasMany or a belongsTo relationship, as we will soon see.

The database schema behind our one-to-many relationships

A many-to-many association made possible through the use of a third join table. In this example, an author can write multiple books, and a book can have multiple authors.
A single foreign key column, as seen in previous examples (see Figure 10-5), would not suffice here. In order to model this relationship, a third join table (authors_books) is required, in which multiple relationships for any given record can be stored.
That a third join table exists, which derives its name from that of the two related tables, separated by an underscore, and ordered alphabetically. In this example: authors_books
That the column names used within your join table are derived from the singular versions of the two related tables, followed by _id. In this example: author_id and book_id
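Given those conventions, the many-to-many relationship can be sketched as follows, with eager loading used to pull in a book's authors in a single fetch (the name column is an assumption).

```javascript
var Author = bookshelf.Model.extend({
  tableName: 'authors',
  books: function () {
    return this.belongsToMany(Book);    // joined through authors_books (author_id, book_id)
  }
});

var Book = bookshelf.Model.extend({
  tableName: 'books',
  authors: function () {
    return this.belongsToMany(Author);
  }
});

new Book({ id: 1 }).fetch({ withRelated: ['authors'] })
  .then(function (book) {
    book.related('authors').each(function (author) {
      console.log(author.get('name'));
    });
  });
```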
If you were to quickly survey the database landscape over the past several years, it would be easy to walk away with the impression that so-called “NoSQL” storage platforms have largely supplanted the old guard of relational databases such as MySQL and PostgreSQL, but nothing could be further from the truth. Much like Mark Twain’s prematurely reported death in 1897, the death of the relational database is also an exaggeration.
Relational databases offer a number of compelling features, the vast majority of which lie far outside the scope of this chapter. Many wonderful books are available that devote themselves entirely to this subject, and we encourage you to read a few of them before making critical decisions regarding how and where a project stores its information. That said, a key feature to look for in such systems (and one which was covered earlier in the chapter) is support for transactions: the process by which multiple queries can be grouped into a single unit of work that will either succeed or fail as a whole. The examples involving a financial exchange that we looked at in Listings 10-13 and 10-14 demonstrated the important role this concept has in mission-critical applications.
The platform-agnostic API provided by Knex, combined with its promise-based interface, transaction support, and migration manager, provides developers with a convenient tool for interacting with relational databases. When paired with its sister application, Bookshelf, an ORM that is instantly familiar to those with prior Backbone experience, a powerful combination is formed that simplifies the process of working with complex data.
Knex: http://knexjs.org
Bookshelf: http://bookshelfjs.org
Backbone.js: http://backbonejs.org
Underscore.js: http://underscorejs.org
MySQL: www.mysql.com
PostgreSQL: www.postgresql.org
MariaDB: http://mariadb.org
SQLite: www.sqlite.org
Multiline: https://github.com/sindresorhus/multiline
Always something new, always something I didn’t expect, and sometimes it isn’t horrible.
—Robert Jordan
We are now familiar with libraries and frameworks such as KnexJS and RequireJS, among many others. This chapter discusses Async.js, a callback-driven JavaScript library that provides a suite of powerful functions to manage asynchronous collection manipulation and control flow.
When it comes to asynchronous programming, it is often standard practice to adopt a callback-oriented approach. The Async.js library also embraces callback-driven asynchronous programming, but in a way that avoids many of the downsides of callback-driven code (such as deeply nested callbacks).
The first argument to each control flow function is typically an array of functions to be executed as tasks. Task function signatures will vary a bit based on the exact Async.js control flow function used, but they will always receive a Node.js-style callback as a last argument.
The last argument to each control flow function is a final callback function to be executed when all tasks are complete. This final callback is itself a Node.js-style callback: it receives an error (or null) as its first argument and may or may not receive additional result arguments as well.
A Node.js-style callback is simply a callback function that always expects an error as its first argument. When the callback is invoked, either an error object is passed as its only argument or null is passed in for the error value and any further values are passed in as additional arguments.
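A minimal illustration of this convention follows; the function and argument names here are hypothetical.

```javascript
// A hypothetical asynchronous task that follows the Node.js callback convention.
function loadConfig(path, callback) {
  setTimeout(function() {
    if (!path) {
      return callback(new Error('No path provided')); // error as the only argument
    }
    callback(null, { path: path, retries: 3 });        // null error, then result values
  }, 10);
}

loadConfig('/etc/app.json', function(err, config) {
  if (err) { return console.error(err); }
  console.log(config);
});
```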
The rest of the chapter will examine a number of control flow functions, and how they vary, if at all, from this general pattern. Since all flows organize tasks and handle errors and values in a similar way, it becomes easier to understand each by contrast.
The meaning of async in Async.js relates to organizing asynchronous operations. The library itself does not guarantee that task functions execute asynchronously. If a developer uses Async.js with synchronous functions, each will be executed synchronously. There is one semi-exception to this rule. The async.memoize() function (which has nothing to do with control flow) makes a function cacheable, so that subsequent invocations won’t actually run the function but will return a cached result instead. Async.js forces each subsequent invocation to be asynchronous because it assumes that the original function was itself asynchronous.
A sequential flow is one in which a series of steps must be executed in order. A step may not start until a preceding step finishes (except for the first step), and if any step fails, the flow fails as a whole. The functions in Listing 11-2 are the steps for changing a fictitious user’s password, the same scenario used to introduce sequential flows in Chapter 12. These steps are slightly different, however.
First, each is wrapped in a factory function that takes some initial data and returns a callback-based function to be used as a step in the sequential flow.
Because these steps are asynchronous, they can’t be invoked one at a time in the same way that synchronous functions can be called. But Async.js tracks the execution of each step internally, invoking the next step only when the previous step’s callback has been invoked, thus creating a sequential flow. If any step in the sequential flow passes an error to its callback, the series will be aborted and the final series callback will be invoked with that error. When an error is raised, the results value will be undefined.
For the rest of this chapter, we’ll be using factory functions instead of bind(), but developers are free to choose whatever approach feels most natural to them.
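The following is a condensed sketch of this pattern using async.series(); the step names and data are hypothetical stand-ins for the password-change steps described above, not the book’s exact listing.

```javascript
const async = require('async');

// Hypothetical factories: each accepts initial data and returns a task
// function suitable for async.series().
function checkOldPassword(userID, oldPassword) {
  return function(callback) {
    // ...verify the password asynchronously, then signal completion
    callback(null, 'password ok');
  };
}

function savePassword(userID, newPassword) {
  return function(callback) {
    callback(null, 'password saved');
  };
}

async.series([
  checkOldPassword(1001, 'old-secret'),
  savePassword(1001, 'new-secret')
], function(err, results) {
  if (err) { return console.error('The flow failed:', err); }
  console.log(results); // the result of each step, in order
});
```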
Sometimes it is helpful to run independent tasks in parallel and then aggregate results after all tasks are finished. JavaScript is an asynchronous language, so it has no true parallelism, but scheduling long, nonblocking operations in succession will release the event loop to handle other operations (like UI updates in a browser environment or handling additional requests in a server environment). Multiple asynchronous tasks can be scheduled in one turn of the event loop, but there is no way to predict at which future turn each task will complete. This makes it difficult to collect the results from each task and return them to calling code. Fortunately, the async.parallel() function gives developers the means to do just that.
In Listing 11-6, Async.js is imported into a fictitious web page with a standard <script> tag. Tasks are scheduled using the async.parallel() function , which, like async.series() , accepts an array of task functions to be executed and a final callback function that will receive an error or the aggregated results. Parallel tasks are simply functions that accept a single callback argument that should be invoked once the asynchronous operation within a task function is completed. All callbacks conform to the Node.js callback convention.
The getUser() function in Listing 11-6 is a factory that accepts a userID argument and returns a function that accepts a conventional Node.js-style callback. Because getUSStates() has no actual arguments, it need not be wrapped in a factory function but is used directly instead.
The Async.js library will iterate over each task in the tasks array, scheduling them one after the other. As each task completes, its data is stored, and once all tasks have finished, the final callback passed to async.parallel() is invoked.
Results are sorted according to the order of tasks passed to async.parallel(), not the order in which tasks are actually resolved. If an error occurs in any parallel task, that error will be passed to the final callback, all unfinished parallel tasks will be ignored once they complete, and the results argument in the final callback will be undefined.
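A condensed sketch of this pattern is shown below, with hypothetical task functions standing in for those in Listing 11-6.

```javascript
async.parallel([
  function getUser(callback) {
    setTimeout(function() { callback(null, { id: 7, name: 'Ada' }); }, 50);
  },
  function getUSStates(callback) {
    setTimeout(function() { callback(null, ['Alabama', 'Alaska' /* ... */]); }, 20);
  }
], function(err, results) {
  if (err) { return console.error(err); }
  // results[0] is the user and results[1] is the state list,
  // regardless of which task finished first.
  console.log(results);
});
```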
When tasks in a series each depend on a value from a preceding task, a pipeline flow (or waterfall) is needed. Listing 11-7 represents tasks for a fictitious corporate rewards program in which a user’s age is calculated (based on date of birth), and if the user’s age meets certain thresholds, the user is awarded a cash prize.
The getUser() factory function accepts a userID and returns another function that, when invoked, looks up a user record. It passes the user record to its callback.
The calcAge() function accepts a user argument and invokes its callback with the calculated age of the user.
The reward() function accepts a numeric age argument and invokes its callback with the selected reward if the age meets certain thresholds.
Like async.series() and async.parallel(), an error passed to a callback in any waterfall task will immediately halt the pipeline and invoke the final callback with the error.
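A sketch of the waterfall modeled on the rewards scenario follows; the age threshold, prize amount, and user data are illustrative only.

```javascript
// Hypothetical factory, analogous to the one described above.
function getUser(userID) {
  return function(callback) {
    callback(null, { id: userID, birthYear: 1950 });
  };
}

async.waterfall([
  getUser(42),
  function calcAge(user, callback) {  // receives the user from the previous step
    callback(null, new Date().getFullYear() - user.birthYear);
  },
  function reward(age, callback) {    // receives the age from the previous step
    callback(null, age >= 65 ? 100 : 0);
  }
], function(err, prize) {
  if (err) { return console.error(err); }
  console.log('Cash prize:', prize);
});
```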
Pipelines are so helpful for processing data that async.seq() will take a series of functions, just like async.waterfall(), and combine them into a single, reusable pipeline function that can be called multiple times. This could be done manually, of course, by using a closure to wrap async.waterfall(), but async.seq() is a convenience function that saves developers the trouble.
Creating a pipeline with async.seq() is very similar to using async.waterfall(), as shown in Listing 11-11. The primary difference is that async.seq() does not invoke the steps immediately but returns a pipeline() function that will be used to run the tasks later. The pipeline() function accepts the initial arguments that will be passed to the first step, eliminating the need for factory functions or binding values to the first step when the pipeline is defined. Also, unlike most other async functions, async.seq() is variadic (accepts a varying number of arguments). It does not accept an array of tasks like async.waterfall(), but instead accepts each task function as an argument.
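The sketch below reuses the same hypothetical steps to build a reusable pipeline with async.seq(); note that the task functions are passed as separate arguments rather than as an array.

```javascript
// async.seq() accepts the task functions directly and returns a reusable
// pipeline function that can be invoked any number of times.
const getReward = async.seq(
  function getUser(userID, callback) {
    callback(null, { id: userID, birthYear: 1950 });
  },
  function calcAge(user, callback) {
    callback(null, new Date().getFullYear() - user.birthYear);
  },
  function reward(age, callback) {
    callback(null, age >= 65 ? 100 : 0);
  }
);

// The initial argument is passed when the pipeline runs, not when it is defined.
getReward(42, function(err, prize) { console.log(prize); });
getReward(43, function(err, prize) { console.log(prize); });
```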
Flows that repeat until some condition is met are called loops. Async.js has several looping functions that help coordinate the asynchronous code to be executed and the conditions to be tested within them.
The first two functions, async.whilst() and async.doWhilst(), parallel the well-known while and do/while looping constructs in many programming languages. Each loop runs while some condition evaluates to true. Once the condition evaluates to false, the loops halt.
The async.whilst() and async.doWhilst() functions are nearly identical, except that async.whilst() performs the condition evaluation before any code in the loop is run, whereas async.doWhilst() executes one iteration of the loop before evaluating the condition. Looping code in async.doWhilst() is guaranteed to run at least once, whereas looping code in async.whilst() may not run at all if the initial condition is false.
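Below is a minimal counting-loop sketch using the classic async.whilst() signature (a synchronous test, an asynchronous loop body, and a final callback). Newer releases of Async.js expect the test function itself to accept a callback, so check the documentation for the version you are using.

```javascript
let count = 0;

async.whilst(
  function test() { return count < 5; },          // evaluated before each iteration
  function iterate(callback) {
    count++;
    setTimeout(function() { callback(null); }, 100);
  },
  function done(err) {
    if (err) { return console.error(err); }
    console.log('Looped', count, 'times');
  }
);
```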
Closely related to the async.whilst() and async.doWhilst() functions are the async.until() and async.doUntil() functions, which follow similar execution patterns but, instead of performing a loop when some condition is true, perform loops until some condition tests false.
The async.doUntil() function behaves like async.doWhilst() : it runs the loop first before evaluating the test condition. Its signature also swaps the order of the test condition function and the looping function.
A common use case for loops is the retry loop, where a task is attempted up to a given number of times. If the task fails but hasn’t met the retry limit, it is executed again. If the retry limit is met, the task is aborted. The async.retry() function simplifies this process by handling the retry logic for developers. Setting up a loop is as simple as specifying a retry limit, a task to execute, and a final callback that will handle errors or receive a result.
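A small sketch of a retry loop follows; the flaky task here is hypothetical.

```javascript
// Try an unreliable task up to five times before giving up.
async.retry(5, function fetchQuote(callback) {
  // Hypothetical unreliable operation.
  Math.random() > 0.7
    ? callback(null, 'The quote of the day')
    : callback(new Error('Service unavailable'));
}, function(err, quote) {
  if (err) { return console.error('Gave up after 5 attempts:', err); }
  console.log(quote);
});
```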
Infinite loops are bad news in synchronous programming because they arrest the CPU and prevent any other code from executing. But asynchronous infinite loops don’t suffer from this downside because, like all other code, they are scheduled for future turns of the event loop by the JavaScript scheduler. Other code that needs to be run can “butt in” and request to be scheduled.
An infinite loop can be scheduled with async.forever(). This function takes a task function as its first argument and a final callback as its second. The task will continue to run indefinitely unless it passes an error to its callback. Scheduling asynchronous operations back to back with setTimeout() and a wait duration of 0, or with setImmediate(), can make such a loop nearly unresponsive, so it is best to pad each asynchronous task with a longer wait duration, at least in the hundreds of milliseconds.
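A sketch of a padded infinite loop; the status check here is a hypothetical stand-in for a real asynchronous operation.

```javascript
// Hypothetical asynchronous status check.
function checkStatus(callback) {
  setTimeout(function() { callback(null, 'OK'); }, 10);
}

async.forever(
  function poll(next) {
    checkStatus(function(err, status) {
      if (err) { return next(err); }   // passing an error stops the loop
      console.log('status:', status);
      setTimeout(next, 500);           // pad each iteration to avoid saturating the event loop
    });
  },
  function(err) {
    console.error('Polling stopped:', err);
  }
);
```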
The last type of control flow this chapter covers is batching. Batches are created by partitioning some data into chunks, and then operating on each chunk one at a time. Batches have some threshold that defines how much data can be put into a chunk. Data added to a batch flow after work has commenced on a chunk is queued until work is complete, then gets processed in a new chunk.
The queue will emit a number of events at certain points in its life cycle. Functions may be assigned to corresponding event properties on the queue object to handle these events. These event handlers are optional; the queue will operate correctly with or without them.
The first time the queue has reached the maximum number of active workers, it will invoke any function assigned to queue.saturated. When the queue is handling all items and no other items are queued, it will call any function assigned to queue.empty. Finally, when all workers have completed and the queue is empty, any function assigned to queue.drain will be called. The functions in Listing 11-19 handle each of these raised events.
The empty and drain events differ subtly. When empty is triggered, workers may still be active though no items remain in the queue. When drain is triggered, all workers have ceased and the queue is completely empty.
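A compact sketch of a queue with these event handlers is shown below. The property-style handlers match the Async.js 2.x API used in this chapter; version 3 exposes them as methods instead. The worker and item names are illustrative.

```javascript
// A queue whose worker processes items with a concurrency of 2.
const uploadQueue = async.queue(function worker(file, callback) {
  console.log('Uploading', file.name);
  setTimeout(callback, 100);   // stand-in for an asynchronous upload
}, 2);

uploadQueue.saturated = function() { console.log('All workers are busy'); };
uploadQueue.empty     = function() { console.log('No items left in the queue'); };
uploadQueue.drain     = function() { console.log('All items have been processed'); };

uploadQueue.push({ name: 'a.png' });
uploadQueue.push({ name: 'b.png' }, function(err) {
  // Optional per-item callback, invoked when this particular item is done.
});
```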
The async.cargo() function is similar to async.queue() in that it queues up items to be processed by some task function. They differ, however, in how the workload is divided. async.queue() runs multiple workers up to a maximum concurrency limit—its saturation point. async.cargo() runs a single worker at a time, but splits up the queued items to be processed into payloads of a predetermined size. When the worker is executed, it will be given one payload. When it has completed, it will be given another, until all payloads have been processed. The saturation point for cargo, then, is when a full payload is ready to be processed. Any items added to the cargo after the worker has started will be grouped into the next payload to be processed.
A cargo is created by supplying the task function as the first argument to async.cargo() and a maximum payload size as the second. The task function will receive an array of data (with a length up to the maximum payload size) to be processed and a callback to be invoked once the operation is complete.
The cargo object has the same event properties as the queue object, shown in Listing 11-21. The main difference is that the cargo’s saturation limit is reached once a maximum number of payload items has been added, at which point the worker will commence.
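A minimal cargo sketch, with an illustrative worker and payload size:

```javascript
// A cargo whose single worker receives payloads of up to three items at a time.
const logCargo = async.cargo(function worker(messages, callback) {
  console.log('Writing batch of', messages.length, 'messages');
  setTimeout(callback, 50);    // stand-in for an asynchronous bulk write
}, 3);

logCargo.push('first');
logCargo.push('second');
logCargo.push('third');        // the payload is now full; the worker starts
logCargo.push('fourth');       // queued for the next payload
```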
Both async.queue() and async.cargo() schedule the task function to run in the next immediate tick of the event loop. If items are added to a queue or cargo synchronously, one after the other, then the thresholds of each will be applied as expected; the queue will throttle the maximum number of workers, and the cargo will divide the maximum number of items to be processed. If items are added to each asynchronously, however—if items are added after the next immediate turn of the event loop—the task functions may be invoked at less than their maximum capacities.
To guarantee that the maximum thresholds are met for both queue and cargo, push items to each synchronously.
| Flow | Async.js Function(s) |
|---|---|
| Sequential | async.series() |
| Parallel | async.parallel() |
| Pipeline | async.waterfall(), async.seq() |
| Loop | async.whilst()/async.doWhilst(), async.until()/async.doUntil(), async.retry(), async.forever() |
| Batch | async.queue(), async.cargo() |
Sequential and parallel flows allow developers to execute multiple independent tasks, then aggregate results as needed. Pipeline flows can be used to chain tasks together, where the output of each task becomes the input of a succeeding task. To repeat asynchronous tasks a given number of times, or according to some condition, looping flows may be used. Finally, batching flows are available to divide data into chunks to be processed asynchronously, one batch after the next.
By cleverly organizing asynchronous function tasks, coordinating the results of each task, and delivering errors and/or task results to a final callback, Async.js helps developers avoid nested callbacks and brings traditional synchronous control flow operations into the asynchronous world of JavaScript.
You must be the kind of [person] who can get things done. But to get things done, you must love the doing, not the secondary consequences.
—Ayn Rand
JavaScript is a pragmatic utility language, useful in no small part because of its simple APIs and sparse type system. It is an easy language to learn and master because its surface area is so small. And while this characteristic lends itself nicely to productivity, sadly it means that JavaScript types have historically lacked advanced features that would make the language stronger, such as functional iteration constructs native to collections and hashes.
To fill this gap, Jeremy Ashkenas created a library in 2009 called Underscore.js, a collection of over 100 functions used to manipulate, filter, and transform hashes and collections. Many of these functions, such as map() and reduce(), embody concepts common to functional languages. Others, like isArguments() and isUndefined(), are specific to JavaScript.
As the presence of Underscore became ubiquitous in many web applications, two exciting things happened. First, the ECMAScript 5 specification was published in the same year. It features a number of Underscore-like methods on native JavaScript objects such as Array.prototype.map(), Array.prototype.reduce(), and Array.isArray(). While ECMAScript 5 (and to a lesser degree ECMAScript 6 and 7) expands the APIs of several key types, it only includes a fraction of the functionality that Underscore.js provides.
| ECMAScript 5 | Underscore/Lodash |
|---|---|
| Array.prototype.every() | all()/every() |
| Array.prototype.filter() | select()/filter() |
| Array.prototype.forEach() | each()/forEach() |
| Array.isArray() | isArray() |
| Object.keys() | keys() |
| Array.prototype.map() | map() |
| Array.prototype.reduce() | inject()/foldl()/reduce() |
| Array.prototype.reduceRight() | foldr()/reduceRight() |
| Array.prototype.some() | some() |

| ECMAScript 6 | Underscore/Lodash |
|---|---|
| Array.prototype.find() | find() |
| Array.prototype.findIndex() | findIndex() |
| Array.prototype.keys() | keys() |

| ECMAScript 7 | Underscore/Lodash |
|---|---|
| Array.prototype.includes() | include()/contains() |
Because Underscore and Lodash share an API, Lodash can be used as a drop-in replacement for Underscore. The inverse isn’t necessarily the case, however, because of the extra functionality that Lodash supplies. For example, while both Underscore and Lodash have a clone() method, only Lodash implements a cloneDeep() method. Often developers choose Lodash over Underscore because of these extra features, but the performance benefit is tangible as well. According to a function-by-function performance benchmark, Lodash is 35% faster on average than Underscore. It achieves this performance gain by favoring simple loops over native delegation for functions like forEach(), map(), reduce(), and so forth.
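The difference between the two clone functions is easy to demonstrate; the following is a small sketch with hypothetical data.

```javascript
const _ = require('lodash');

const original = { name: 'Ada', address: { city: 'London' } };

const shallow = _.clone(original);      // available in both Underscore and Lodash
const deep    = _.cloneDeep(original);  // Lodash only

original.address.city = 'Cambridge';

console.log(shallow.address.city);      // 'Cambridge' — the nested object is shared
console.log(deep.address.city);         // 'London'    — the nested object was copied
```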
This chapter focuses mostly on features of Underscore and Lodash that are not already implemented in JavaScript, or scheduled to be (the functions in Listings 12-1 and 12-2). Mozilla’s excellent documentation covers each of the native functions, and the Underscore and Lodash API documentation covers each of their implementations as well.
But Underscore and Lodash offer a great deal more than just a few handy functions for objects and collections, several of which will be explored in this chapter.
For brevity, the remainder of this chapter simply refers to Underscore, but understand that, unless otherwise noted, Underscore and Lodash are interchangeable.
In essence, Lodash provides more consistent iteration support for arrays, strings, and objects. Compared to Underscore, Lodash is more of a superset—it offers better API behaviors and features such as AMD support, deep merge, and more.
Other than that, Lodash (also written as Lo-Dash) is more flexible and has been performance-tested in environments such as Node and PhantomJS. If you are familiar with Backbone.js, Lodash may be the better choice, as it ships by default with several Backbone boilerplates.
Lastly, Lodash is updated more frequently than Underscore.
Underscore may be directly imported as a library in the web browser or any server-side JavaScript environment, such as Node.js. It has no external dependencies.
You can download the Underscore.js script directly from the Underscore web site ( http://underscorejs.org ) or install it with a package manager like npm, Bower, or Component.
In the browser, you can include Underscore directly as a script or load it with an AMD- or CommonJS-compatible module loader (such as RequireJS or Browserify). In Node.js the package is simply required as a CommonJS module.
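In the browser, a plain script tag such as <script src="underscore-min.js"></script> exposes the global _ object; in Node.js a single require() call is enough. A minimal sketch of the Node.js case:

```javascript
// After `npm install underscore`, Underscore is required as a CommonJS module.
const _ = require('underscore');

console.log(_.VERSION); // prints the installed Underscore version
```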
All Underscore functions live on the _ (“underscore”) object. Because Underscore is a utility library, it holds no state other than a handful of settings (more on that later in the chapter). All of its functions are deterministic: passing the same value to any function multiple times will yield the same result each time. Once the Underscore object is loaded, it may be used immediately.
Underscore’s utility functions operate mostly on collections (arrays and array-like objects, such as arguments), object literals, and functions. Underscore is most commonly used to filter and transform data. Many Underscore functions complement each other and can work together to create powerful combinations. Because this can be so useful, Underscore has built-in support for function chains that create terse pipelines that apply multiple transformations to data at once.
Pieces of data in a collection often share similar schemas, yet have an identifying attribute that makes each unique. It can be helpful to distinguish these two types of relationships in a set of data—commonality and individuality—in order to quickly filter and work with a subset of objects that matches aggregation criteria.
Underscore has a number of functions that perform these tasks, but three specific functions can be tremendously beneficial when working with collections: countBy(), groupBy(), and indexBy().
If one or more objects in the collection lack the property to be tested, the final result object will contain an undefined key paired with the number of those objects as well.
The groupBy() function may also use an iterator function as its second argument (instead of a property name) if a greater degree of control is required to categorize elements.
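A small sketch of countBy() and groupBy() with hypothetical data:

```javascript
const pets = [
  { name: 'Rex',      species: 'dog' },
  { name: 'Whiskers', species: 'cat' },
  { name: 'Fido',     species: 'dog' }
];

_.countBy(pets, 'species');
// => { dog: 2, cat: 1 }

_.groupBy(pets, 'species');
// => { dog: [ {name: 'Rex', ...}, {name: 'Fido', ...} ], cat: [ {name: 'Whiskers', ...} ] }
```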
It can also be useful to identify differences among data in a collection, especially if those differences can serve as unique identifiers. Fishing a single object out of a collection by a known identifier is a pretty common scenario. Done manually, this would require looping over each element in the collection (perhaps with a while or for loop) and returning the first that possesses a matching unique identifier.
Imagine an airline web site on which a customer selects departure and destination airports. The user chooses each airport via drop-down menus and is then shown additional data about each airport. This additional data is loaded from airport objects in an array. The values chosen in each drop-down menu are the unique airport codes, which are then used by the application to find the full, detailed airport objects.
The indexBy() function behaves a bit like groupBy(), except that each object has a unique value for the indexed property, so the final result is an object whose keys (which must be unique) are the values of each object for a specified property and whose values are the objects that possess each property. In Listing 12-6 the keys for the indexed object are each airport code, and the values are the corresponding airport objects.
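A condensed sketch of that idea, with hypothetical airport data (Lodash 4 renamed this function keyBy()):

```javascript
const airports = [
  { code: 'SFO', name: 'San Francisco International' },
  { code: 'JFK', name: 'John F. Kennedy International' }
];

// Underscore: indexBy(); Lodash 4: keyBy().
const byCode = _.indexBy(airports, 'code');

byCode.JFK.name; // 'John F. Kennedy International' — no iteration required
```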
Keeping an indexed object with relatively stable reference data in memory is a fundamental caching practice. It incurs a one-time performance penalty (the indexing process) to avoid multiple iteration penalties (having to traverse the array each time an object is needed).
Developers often extract wanted data, or omit unwanted data, from collections and objects. This might be done for legibility (when data will be shown to a user), for performance (when data is to be sent over a network connection), for privacy (when data returned from an object or module’s API should be sparse), or for some other purpose.
Underscore has a number of utility functions that select one or more elements from a collection of objects based on some criteria. In some circumstances, this criteria may be a function that evaluates each element and returns true or false (whether the element “passes” the criteria test). In other circumstances, the criteria may be a bit of data that will be compared to each element (or a part of each element) for equality, the success or failure of which determines whether the element “matches” the criteria used.
The where() function is similar to filter() but uses the comparison criteria approach instead. Its first argument is an array of objects, but its second argument is a criteria object whose keys and values will be compared to the keys and values of each element in the array. If an element contains all the keys and corresponding values in the criteria object (using strict equality), the element will be included in the array returned by where().
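The two approaches look like this in practice; the flight data is hypothetical, and note that Lodash 4 dropped where() in favor of passing the criteria object directly to filter().

```javascript
const flights = [
  { airline: 'UA', from: 'SFO', to: 'JFK', onTime: true  },
  { airline: 'UA', from: 'SFO', to: 'ORD', onTime: false },
  { airline: 'AA', from: 'SFO', to: 'JFK', onTime: true  }
];

// Criteria-function approach:
_.filter(flights, function(f) { return f.onTime; });

// Criteria-object approach:
_.where(flights, { airline: 'UA', to: 'JFK' });
// => [ { airline: 'UA', from: 'SFO', to: 'JFK', onTime: true } ]
```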
The Underscore functions covered up to this point all filter larger collections into focused, smaller ones (or even a single object) when a portion of data is unnecessary to the application. Objects are also collections of data, indexed by string keys instead of ordered numbers; and like arrays, filtering data in individual objects can be quite useful.
While pluck() is quite useful for selecting individual properties from objects, it only operates on collections and is not very useful for dealing with individual objects.
Underscore contains a number of utility functions that are frequently used together to create transformation pipelines for data. To begin a chain, an object or collection is passed to Underscore’s chain() function . This returns a chain wrapper on which many Underscore functions may be called in a fluent manner, each compounding the effects of the preceding function call.
cloneDeep() recursively clones the array and all objects and their properties. In step 2 the array data is actually modified, so the array is cloned to preserve its original state.
map(function to12HourFormat() {/*...*/}) iterates over each item in the cloned array and replaces the second 24-hour number in the hours array with its 12-hour equivalent.
filter(function filterByHour() {/*...*/}) iterates over each modified coffee shop and evaluates its hours based on the period ('AM' or 'PM') specified: the first element for the opening hour and the second for the closing hour. The function returns true or false to indicate whether the coffee shop should be retained or dropped from the results.
map(function toShopName() {/*...*/}) returns the name of each remaining coffee shop in the collection. The result is an array of strings that will be passed to any subsequent steps in the chain.
Finally, value() is called to terminate the chain and return the final result: the array of names of coffee shops that are open during the hour and period provided to whatIsOpen() (or an empty array if none match the criteria).
Chains can be created with any initial value, though object and array are the most typical starting points.
Any Underscore function that operates on a value is available as a chained function.
The return value of a chained function becomes the input value of the next function in the chain.
The first argument of a chained function is always the value on which it operates. For example, Underscore’s map() function normally accepts two arguments, a collection and a callback, but when invoked as a chained function, it only accepts a callback. This pattern holds for all chained functions.
Always invoke the value() function to terminate a chain and retrieve its final, manipulated value. If a chain does not return a value, this is unnecessary.
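The following is a condensed sketch of a chain in the spirit of the coffee-shop example, with simplified, hypothetical data.

```javascript
const coffeeShops = [
  { name: 'Beanery',   rating: 4.5 },
  { name: 'Drip Drop', rating: 3.2 },
  { name: 'Perk Up',   rating: 4.8 }
];

const topShops = _.chain(coffeeShops)
  .filter(function(shop) { return shop.rating >= 4; }) // keep highly rated shops
  .map(function(shop) { return shop.name; })           // reduce each to its name
  .value();                                            // terminate the chain

console.log(topShops); // ['Beanery', 'Perk Up']
```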
The times() function takes a number as its first argument and a callback to be invoked for each decremented value of that number. In this example, the callback makeLyrics() will be invoked starting with the number 99 (not 100) and ending with the number 0, for 100 total iterations. For each invocation, one refrain of “99 Bottles” is returned. This creates an array of strings, which is then passed to the next function in the chain.
Functions execute when they are scheduled on JavaScript’s internal event loop. Native functions like setTimeout(), setInterval(), and Node’s setImmediate() give developers a degree of control over when these functions run—which turn of the event loop will handle their invocations. Underscore augments these primitives with a number of control functions that add flexibility to function scheduling.
Underscore’s defer() function mimics the behavior of setImmediate() in a Node.js environment; which is to say, defer() schedules a function to execute on the next immediate turn of the event loop. This is equivalent to using setTimeout() with a delay of 0. Since setImmediate() is not a JavaScript standard function, using Underscore’s defer() in both browser and server environments can provide a greater degree of consistency than polyfilling setImmediate() in the browser.
The example code in Listing 12-19 demonstrates the value of defer() in a user interface. It loads a large data set of playing card information for the popular card game Dominion, then populates an HTML table with card details.
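A minimal sketch of the idea follows; renderTable() and largeDataSet are hypothetical stand-ins for the Dominion example.

```javascript
// Show a status message immediately, then defer the expensive rendering to
// the next turn of the event loop so the browser can paint first.
document.querySelector('#status').textContent = 'Loading…';

_.defer(function() {
  renderTable(largeDataSet);   // hypothetical, long-running DOM work
  document.querySelector('#status').textContent = 'Done';
});
```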
“Debouncing” is the practice of ignoring duplicate invocations, requests, messages, and so forth in a system for some period of time. In JavaScript, debouncing a function can be very helpful if a developer anticipates that duplicate, identical function calls may be made in quick succession. A common scenario for a debounced function, for example, is preventing a form’s submit handler from being called more than once when a user accidentally clicks a Submit button multiple times on a web page.
In Listing 12-20 an onClick() function is created by invoking debounce(). The first argument to debounce() is the function that will actually be run once all duplicate invocations have stopped. The second argument is a duration, in milliseconds, that must elapse between invocations for the callback to finally be triggered. For example, if a user clicks the #submit button once, and then clicks it again within the 300-millisecond time span, the first invocation is ignored and the wait timer is restarted. Once the wait period has timed out, the debounce() callback will be invoked, alerting the user that the click has been handled.
Each time a debounced function is invoked, its internal timer is reset. The specified time span represents the minimum time that must pass between the last invocation and its preceding invocation (if any) before the callback function executes.

A debounced function invoked multiple times
The debounced function’s callback will receive any arguments passed to the debounced function itself. For example, in Listing 12-20, jQuery’s event object e is forwarded to the debounced function’s callback. While each invocation may pass different arguments, it is important to realize that only the arguments passed during the last invocation within the wait period will actually be forwarded to the callback. The debounce() function accepts an optional third parameter, immediate, which may be true or false. Setting this parameter to true will invoke the callback for the first invocation instead, ignoring all subsequent duplicates for the wait period. If the arguments passed to the debounced function vary, capturing the first set of arguments instead of the last might be strategically beneficial.
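A minimal sketch of the scenario described above; the button selector and the jQuery binding are illustrative, not the book’s exact listing.

```javascript
// Only handle the last click in any 300 ms burst of clicks.
const onClick = _.debounce(function(e) {
  alert('Form submitted');
}, 300);

// Hypothetical jQuery binding for the Submit button.
$('#submit').on('click', onClick);

// Passing true as the third argument would fire on the first click instead,
// ignoring the rest of the burst:
// const onClick = _.debounce(handler, 300, true);
```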
Underscore’s throttle() function is similar to debounce(). It ignores subsequent invocations of a function for a specified period of time, but does not reset its internal timer with each function call. It effectively ensures that only one invocation happens during a specified period, whereas debounce() guarantees that only one invocation will happen sometime after the last invocation of a debounced function. Throttling a function can be particularly useful if a function is likely to be called many times with the same arguments, or when the granularity of the arguments is such that it is not useful to account for every invocation of the function.
The in-memory JavaScript message bus, postal.js, is a useful library for routing messages through an application. Some application modules send messages at a frequency that might not be useful for human consumption, so any function that displays these messages to a user might be a good candidate for throttling.
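A sketch of that idea, assuming a postal.js channel and topic name chosen purely for illustration:

```javascript
// Display at most one status message per second, no matter how often the
// message bus publishes.
const showStatus = _.throttle(function(data) {
  document.querySelector('#status').textContent = data.message;
}, 1000);

postal.channel('metrics').subscribe('status.update', showStatus);
```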

A throttled function invoked multiple times
Underscore offers a micro-templating system that compiles a template string (typically HTML) into a function. When this function is invoked with some data, it uses the template string’s binding expressions to populate the template, returning a new HTML string. Developers who have used templating tools like Mustache or Handlebars will be familiar with this process. Unlike these more robust templating libraries, however, Underscore’s templates have a much smaller feature set and no real template extension points. Underscore can be a strong choice as a template library when the templates in an application are fairly trivial and there is no desire or need to incur the overhead of a dedicated templating library.
Gator tags come in three varieties. The tags used in Listing 12-22 generate safe HTML output by escaping any HTML tag sequences. If the movie synopsis contained an HTML tag such as <strong>, it would be converted to &lt;strong&gt;. In contrast, the gator tag <%= may be used to output unescaped strings with HTML markup intact. The third gator tag is the JavaScript evaluation tag, and it simply begins with <% (more on this tag will be covered in a bit). All gator tags share the same closing tag, %>.
Once a template string is compiled to a function, it may be invoked any number of times with different data to produce different rendered markup. It is common for applications to compile template strings into functions during page load (or during application startup, if Node.js is the runtime environment), then call each as needed during the lifetime of the application. If template strings do not change, there is no need to recompile them.
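A small sketch of the compile-once, render-many pattern with a hypothetical movie object:

```javascript
// The <%- %> tag escapes HTML in the interpolated value.
const movieTemplate = _.template(
  '<h2><%- title %></h2><p><%- synopsis %></p>'
);

movieTemplate({ title: 'Metropolis', synopsis: 'A <strong>classic</strong>.' });
// => '<h2>Metropolis</h2><p>A &lt;strong&gt;classic&lt;/strong&gt;.</p>'
```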
Many templating libraries include shorthand tags for common templating chores like iterating over a collection. To keep its templating system thin, Underscore forgoes syntactical sugar and, instead, allows developers to write template loops in plain, valid JavaScript.
Generally it is bad practice to perform calculations in a template (the application’s “view”). Instead, the actual calculated value should be part of the data passed to the compiled template function. Listing 12-25 should be considered for demonstration purposes only.
Gator tags can be a bit unruly in nontrivial templates. Fortunately, Underscore allows developers to change the syntax of template tags with regular expressions. Setting the templateSettings property on the Underscore object to a hash of key/value settings alters the behavior of Underscore for the lifetime of your page (or Node.js process), and affects all rendered templates.
Any markup compiled by the template system must now support the specified Mustache syntax. Templates that still contain gator tags will not be rendered correctly.
| Setting | Template Syntax | Regular Expression |
|---|---|---|
| evaluate | {{ ... }} | /{{(.+?)}}/g |
| interpolate | {{= ... }} | /{{=(.+?)}}/g |
| escape | {{- ... }} | /{{-(.+?)}}/g |
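Applying these settings is a one-time assignment; the following sketch uses the regular expressions from the table above.

```javascript
// Switch Underscore's template tags to a Mustache-like syntax for the
// lifetime of the page or Node.js process.
_.templateSettings = {
  evaluate:    /\{\{(.+?)\}\}/g,
  interpolate: /\{\{=(.+?)\}\}/g,
  escape:      /\{\{-(.+?)\}\}/g
};

const greeting = _.template('<p>{{- name }}</p>');
greeting({ name: 'Ada' }); // '<p>Ada</p>'
```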
The variable property may be set in Underscore’s global settings. However, giving variables good and relevant names is important, so it makes more sense to name a variable according to its context. Instead of defining some generic variable like data or item, the examples in this section use the variable name movie and apply it by passing a settings object to template() when the movie template is compiled.
If multiple default objects are passed to defaults(), they are evaluated from first to last. Once a missing property has been filled in from one default object, the same property on any subsequent default object is ignored.
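A small sketch of that behavior with hypothetical data:

```javascript
const order = { size: 'large' };

_.defaults(order, { size: 'medium', milk: 'whole' }, { milk: 'oat', sugar: 1 });
// => { size: 'large', milk: 'whole', sugar: 1 }
// size was already present; milk came from the first defaults object, so the
// second object's milk value was ignored; sugar filled the remaining gap.
```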
Modern and future implementations of ECMAScript have given developers a great many utility functions on built-in types like String, Array, Object, and Function. Unfortunately, the world moves faster than specifications come to fruition, so libraries like Underscore and Lodash occupy the intersection of developer needs and language maturity.
With over 100 utility functions and a micro-templating system, Underscore enables developers to manipulate, transform, and render data in objects and collections. Underscore can be used in browser and server environments and has no dependencies. It can be added to a web page with a simple script tag or imported as an AMD or CommonJS module. Popular package managers like Bower, npm, component, and NuGet can all download prebuilt Underscore packages for a developer’s platform of choice.
Underscore’s strong feature set and ubiquitous availability make it an ideal and unobtrusive Swiss Army knife for JavaScript projects.
Underscore: http://underscorejs.org/
Lodash: https://lodash.com/
So far in this book, we have covered a diverse selection of JavaScript frameworks. Many of these frameworks serve a specific purpose and cater to a particular niche. Others offer a more diverse set of features and can perform a variety of tasks and actions.
Similarly, many of the JavaScript frameworks that we have discussed in this book have a smaller user base and community, while others are widely popular, with a large user base and a dedicated community.
In this chapter, we will be turning our attention toward another rather popular JavaScript framework that has risen to fame in a comparatively smaller amount of time—React.
Speaking purely in textbook terms, React is not even a proper “framework” per se. Instead, it is a JavaScript library meant for building user interfaces.
However, owing to its sheer popularity, and the fact that it is now being used in projects of diverse nature, React is now almost as large and as component based as any other framework. This is why it is no longer uncommon to see React being mentioned wherever JavaScript frameworks are discussed.

React is a JavaScript library for building user interfaces
Keeping in mind that React enjoys the backing of Facebook, it has grown in stature over time. Today, React is often employed by developers to empower user interfaces of both big and small projects.
Even more so, React has grown beyond the simplified textbook definition of being a “library” or “framework.” Nowadays, React is often used in conjunction with other technologies and scripting languages to power complex web apps. For instance, even though its core is written in PHP, WordPress has shifted toward React for powering its new block-based editor (named Gutenberg) as well as its desktop apps for the WP.com hosted solution.
React was first released in 2013, and since then, it has consistently risen in terms of popularity.
React is component based. This implies that parts of the app are wrapped within self-contained and highly encapsulated modules known as Components. Furthermore, since the components’ data is written in JavaScript, it is possible for developers to pass rich data through their apps and keep the state out of the DOM.
React makes use of one-way data binding and the Flux architecture to manage the application workflow. Beyond that, React components are generally written using JSX. This makes for more legible and easier to understand code, and it also flattens the learning curve for many developers.
JSX stands for JavaScript XML, which is an extension to the JavaScript language syntax. It provides a way to write code in a manner or style that is slightly similar to HTML, making it easier for many developers to comprehend the syntax within minutes.
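JSX is not valid JavaScript on its own; a build step (typically Babel) compiles it into plain function calls. The following sketch shows a JSX expression alongside roughly what it compiles to.

```javascript
// JSX form (requires a transform step such as Babel):
const greetingJsx = <h1 className="greeting">Hello, world!</h1>;

// Equivalent plain JavaScript:
const greetingJs = React.createElement(
  'h1',
  { className: 'greeting' },
  'Hello, world!'
);
```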
In React, for every DOM object there is a corresponding virtual DOM object. This virtual DOM is a copy or representation of the original DOM, which helps support one-way binding. The real DOM is updated only when a change is detected in the virtual DOM—there is no need to re-render the entire page. It is worth bearing in mind that manipulating the virtual DOM is faster than modifying the original DOM because nothing needs to be drawn onscreen.
React can be used to create highly interactive and very dynamic user interfaces for a wide variety of purposes, such as web sites, mobile applications, and more. Developers can create simplified views for various states, and React can update and modify the relevant components as and when the state changes. Such declarative coding can save a good deal of time and effort.
Well, that is what React brings to the table. But how do we get started with React?
The first step, obviously, is to add React to our project in order to use its features. There is more than one way to do this, but the simplest and most commonly recommended method is to add React to an HTML page by means of a <script> tag.
Adding React to our HTML pages is very simple. The first step is to add an empty <div> tag in the HTML page, right where we want the React component to appear.
The next step is to add <script> tags to the same HTML page. These should ideally be placed just before the closing </body> tag.
Lastly, we create the React component. It is noteworthy that the file name must be the same as the one specified in the <script> tag earlier; in our case, it will be super_react.js.
The React component file will then pass on the component to the HTML. Voila! We have successfully added React to our web page, and can now start working with it.
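A minimal sketch of what these two pieces might look like is shown below; the element id, component name, and CDN URLs are illustrative, not the book’s exact listing.

```html
<!-- index.html (abridged) -->
<div id="react_root"></div>

<script src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<script src="super_react.js"></script>
```

The component file then renders itself into that empty element; React.createElement() is used here so that no JSX transform is required.

```javascript
// super_react.js — a hypothetical, minimal component.
function Greeting() {
  return React.createElement('h1', null, 'Hello from React!');
}

ReactDOM.render(
  React.createElement(Greeting),
  document.getElementById('react_root')
);
```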
Obviously, this was a fairly simple and theoretical example of adding React. But before seeing a React app in action, let us also cover the traditional method of installing React.
Sometimes, adding React to an HTML page by means of <script> tags may not suffice. This is especially true if we are trying to integrate React with an existing workflow, say a component library or a server-side project, and so on.
Similarly, if we are trying to build a single-page web app, using Create React App might be a better choice. This will enable us to make use of the latest React features and also set us up with an environment ideal for building Single Page Apps in React as well as learning React.

Installing Create React App using npm
Create React App requires Node.js 6.0 or higher and npm version 5.2 or higher.
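The typical sequence of terminal commands looks something like the following; the project name my-sample-app is arbitrary.

```bash
npm install -g create-react-app   # one-time global install (or use npx instead)
create-react-app my-sample-app    # scaffold a new project
cd my-sample-app
npm start                         # launch the dev server, typically at http://localhost:3000
```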

Creating a React app from the terminal using create-react-app

App successfully created
Once our application is set up, we can launch it to preview in browser.

Launching the React app

App successfully running at localhost:3000
Great, we have now successfully created a sample React application.
Now, it is time to do something more with it.
Notice that the application tells us to open the src/App.js file? Well, that is the main file of the application.
We can leave the CSS classes as is and even use them in our application.
Now, let us try building a very simple to-do application in React. Our src/App.js file uses JSX notation, which we have discussed earlier.
First, it imports the necessary components.
Then, it creates a class, MyToDoList, that extends React’s Component class and implements the to-do handling logic.
Then, we render the to-do input field and the button, along with an H1 tag.
Lastly, we are exporting the result to the display.
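The following is a condensed sketch of what such an App.js might look like; it is not the book’s exact listing, and the class, state, and handler names are illustrative.

```javascript
// src/App.js — a condensed sketch of a minimal to-do component.
import React, { Component } from 'react';
import './App.css';

class MyToDoList extends Component {
  constructor(props) {
    super(props);
    this.state = { items: [], text: '' };
    this.handleChange = this.handleChange.bind(this);
    this.handleAdd = this.handleAdd.bind(this);
  }

  handleChange(e) {
    this.setState({ text: e.target.value });
  }

  handleAdd() {
    if (!this.state.text) { return; }
    this.setState({
      items: this.state.items.concat(this.state.text),
      text: ''
    });
  }

  render() {
    return (
      <div className="App">
        <h1>My To-Do List</h1>
        <input value={this.state.text} onChange={this.handleChange} />
        <button onClick={this.handleAdd}>Add</button>
        <ul>
          {this.state.items.map((item, i) => <li key={i}>{item}</li>)}
        </ul>
      </div>
    );
  }
}

export default MyToDoList;
```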

To-do application preview

To-do application in action

src/App.js file preview
You can find the code for the to-do application over at this book’s GitHub repo.
In this chapter, we have covered what React is and what makes it different from other JavaScript frameworks and libraries.
When it comes to React, the ecosystem is so vast that there is no dearth of learning resources or literature. Ranging from tutorials to books and even video courses, there is no shortage of good content pertaining to React development.
Since React is often used for front-end UI development, it might be a good idea to use it in conjunction with a Node.js framework, such as Next.js or Sails.js, for more complex projects.
React Web site: https://reactjs.org/
React Documentation: https://reactjs.org/docs/getting-started.html
React Community on reddit: www.reddit.com/r/reactjs/
Lastly, it is worth pointing out again that since React is under the aegis of the likes of Facebook, it is not very likely that this particular JavaScript library will fall out of favor anytime soon. As such, for building rich web applications and web sites, React is a very sensible choice.
So far in this book, we have covered various JavaScript frameworks that serve different purposes. Most of these JS frameworks have well-established ecosystems and have been around for years.
But what about a newer JavaScript framework? One that is rising at a really impressive pace and, in spite of being a lesser known and relatively younger entity, is as powerful as any other framework in its league?
Yes, we are talking about Vue.js, a progressive and very popular front-end JavaScript framework.
So now, it is time to get started with Vue. In this chapter, we will be learning about Vue.js framework, what it is about, and more importantly, what makes it special.
Furthermore, we will also be creating a simple Vue application so as to better understand the functionality and methodology of this JS framework.
First up, what exactly is Vue.js, and why should we be interested in it? It is worth noting that despite being a relative newcomer, Vue.js has risen in popularity and is steadily growing in stature. Surely there must be something about it that makes it worth the effort?
So, what makes this framework tick?
Vue is a progressive JavaScript framework that is fairly tiny (approximately 20 KB).
Notice the word “progressive”? What exactly does it mean in this context? Progressive implies that the framework in question can be applied as additional markup to existing HTML.
In other words, Vue.js being a progressive framework means that its templates are bound to a data model, and the framework “reacts” to updates in that model.
Vue (pronounced /vjuː/, like view) is a progressive framework for building user interfaces. It is designed from the ground up to be incrementally adoptable and can easily scale between a library and a framework depending on different use cases. It consists of an approachable core library that focuses on the view layer only and an ecosystem of supporting libraries that helps you tackle complexity in large single-page applications.
Vue has been designed especially with adaptability in mind—so much so that the core library consists of just the “view” layer (notice the name “Vue,” which can be pronounced as “view”?). This very layer can be bound to or integrated with other libraries and projects.

Vue.js is a progressive JavaScript framework for building user interfaces
Owing to its simplicity and ease of use, as well as its comparatively gentle learning curve, Vue has also risen in popularity for smaller projects. This includes the likes of single-page web apps, which can be entirely powered by Vue.js.
Beyond that, Vue has earned a reputation for being far less opinionated than the likes of Angular and more nimble and modern than many other JS frameworks out there. This, of course, is more of an opinion-based verdict, and not everyone may find Vue to have an edge over other frameworks. With that said, very few JS frameworks or libraries have risen in popularity this decade as quickly as Vue has. Naturally, that reputation is not without good reason.
So, how do we get started with Vue.js?
The first step, obviously, is to install Vue in order to use it in our projects and applications.
Installing Vue.js is, basically, a no-brainer. We can choose to either include it directly with the <script> tag or go the usual way and install via npm.
If we are including Vue in our projects via the <script> tag, Vue will be registered as a global variable.
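A minimal sketch of the script-tag approach follows; the CDN URL is one example of where Vue 2 can be loaded from, and the element id and message are illustrative.

```html
<div id="app">{{ message }}</div>

<script src="https://cdn.jsdelivr.net/npm/vue@2/dist/vue.js"></script>
<script>
  // Vue is now available as a global variable.
  new Vue({
    el: '#app',
    data: { message: 'Hello from Vue!' }
  });
</script>
```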
It might be a smart idea to pay attention to the version numbering and build system, as using an experimental Vue version in a production-level project can break things. The Vue.js documentation has detailed info on which build of Vue to use and when.
The second way of installing Vue.js is to do so via npm. As we have learned by now, npm refers to the Node Package Manager. We will need to have Node.js up and running on our system in order to use npm, and if you have been following the chapters of this book so far, there are very good chances you already have Node.js and npm all set up.

Installing Vue.js using npm
Installing Vue via npm will also give us access to the Vue.js CLI (assuming we have a compatible version of Node.js on our system; as long as we have the latest build of Node, we should be fine). The minimum supported version of Node is >=8.

Vue CLI provides a set of standard tools for rapid Vue.js app development

Installing Vue CLI via npm

Creating our first Vue.js application using Vue CLI
The installer will ask us to select some options, and unless there is something custom needed, we can go with the default linter, and so on. Again, it can take some time to load all the required dependencies.
Once the project is set up, we can actually launch the app and preview in a browser.
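The full sequence of terminal commands might look like the following; the project name my-first-app is arbitrary, and the development server typically listens on http://localhost:8080.

```bash
npm install -g @vue/cli     # install the Vue CLI globally
vue create my-first-app     # scaffold a project (accept the default preset or customize)
cd my-first-app
npm run serve               # compile and serve the app for local development
```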

Running our first Vue application

Previewing the Vue app in the web browser

Running Vue UI to launch the GUI for project management in Vue.js

Selecting location of project in Vue UI

Creating a new project using Vue UI in the browser
For those of us who are not the most comfortable working with the command line, using Vue UI for project creation is an ideal choice.
Nonetheless, once we are done with the setup of our first project, and having also tested and launched the app in the web browser, let us see the file structure and functioning methodology of our new app.
We will now be exploring the specific files of our new Vue application. This will help us understand how Vue.js handles its various components and how it outputs the data to the browser.
Upon navigating to the directory where we have created our new project, we will find the directory structure of the app.

Directory structure showing file hierarchy in a Vue project
Most of the preceding directories are fairly self-explanatory. For instance, the public folder contains some assets and an index.html file into which the app is rendered in the browser.
Let us look at the src directory in greater detail.
An assets directory, containing images and other media assets
A components directory, which, for the sample app generated earlier, contains the HelloWorld.vue file
Two files, namely, App.vue and main.js

src/main.js file of the Vue application
In the given file, we are first importing the Vue library and then the App component from App.vue.
Thereafter, we are setting the productionTip to false, so that Vue will not output “Development Mode” in the console.
Next, we are creating our Vue instance and assigning it to the DOM element that is identified by #app, so as to use the App component.
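For reference, the generated src/main.js typically looks like the following sketch.

```javascript
// src/main.js as generated by Vue CLI (a representative sketch).
import Vue from 'vue';
import App from './App.vue';

Vue.config.productionTip = false;

new Vue({
  render: h => h(App)   // render the App component
}).$mount('#app');      // mount onto the element identified by #app
```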
The src/App.vue file is a Single File Component, containing HTML, CSS, as well as JS code.
The CSS code of this file is self-explanatory, as it provides the styling for the code. The script tag, however, is importing a component from the components/HelloWorld.vue file. Let us, therefore, turn our attention toward the said file itself.
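An abridged sketch of the generated App.vue is shown below; the exact markup and styles in a freshly scaffolded project may differ slightly.

```html
<!-- src/App.vue (abridged) — a Single File Component with template, script, and style blocks. -->
<template>
  <div id="app">
    <HelloWorld msg="Welcome to Your Vue.js App"/>
  </div>
</template>

<script>
import HelloWorld from './components/HelloWorld.vue';

export default {
  name: 'App',
  components: { HelloWorld }
};
</script>

<style>
/* Global styles for the application live here. */
</style>
```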
The components/HelloWorld.vue file might seem to be slightly larger at first, but even a slight look at its contents will be enough to comprehend the way it operates.
This file contains our HelloWorld component that is in turn included in the App component. When we preview the app in the browser, we can see that this file’s component outputs a set of links with some explanatory text and other info.
In the preceding code, it is noteworthy that CSS is “scoped.” This means any CSS added to the HelloWorld component is not global in nature and will not be applied to other components.
The message or info that this component will output is stored in the data property of the Vue instance.
So now, we have seen that the HelloWorld component is used to output the contents of our app, and with scoped attributes being set, the CSS is not leaked onto the other components.
Further, the HelloWorld component is imported by the App.vue file, and the App component itself is imported by the main.js file.
At this point, we can safely turn to the index.html file.
This is the element that the Vue application will use to attach to the DOM.
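In an abridged form, the relevant part of public/index.html looks something like this.

```html
<!-- public/index.html (abridged): the empty #app element that Vue mounts onto. -->
<body>
  <div id="app"></div>
  <!-- built files will be auto injected -->
</body>
```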
And there we have it! These are the major files that our sample app runs on. We have read and understood the way each component is handled and imported, and it might be worthwhile to refer to the app output once again here to better visualize the app’s functioning.
In this chapter, we familiarized ourselves with Vue.js, plus we also learned how to install this JS framework, what specialty it has to offer, as well as how to create a sample app.
Next, we learned about the major files and components within a Vue.js application, how its projects are handled, as well as which file or component serves a particular purpose.
To learn more about Vue.js, a good place to start might be the official documentation itself. Both the Style Guide and the API docs are fairly well laid out and detailed in nature. Beyond that, the Vue.js ecosystem also has a job board and a news portal, to help developers get the most out of their skills and stay up to date with the latest insights and information.
Official Vue.js Documentation: https://vuejs.org/v2/guide/
Vue.js Cookbook: https://vuejs.org/v2/cookbook/
Vue.js Job Board: https://vuejobs.com/
Vue.js News Board: https://news.vuejs.org/
Vue.js Examples: https://vuejs.org/v2/examples/
In addition to that, it might also be worth the effort to check out the Awesome Vue repository on GitHub that shares a curated list of some of the most interesting tools and stuff related to Vue.js—you may visit the repository at https://github.com/vuejs/awesome-vue .
VuePress Homepage: https://vuepress.vuejs.org/
VuePress Intro and Installation Tutorial: https://codecarbon.com/vuepress-static-site-generator/