

Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Acquisitions Editor: Koushik Sen
Content Development Editors: Tanmayee Patil, Rutuja Yerunkar
Production Coordinator: Ratan Pote
First published: July 2018
Production reference: 1230718
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78953-966-0

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at service@packtpub.com for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Anthony Nandaa is a senior software developer with more than 7 years of professional programming experience. He was introduced to programming 3 years before his career as a developer began, working with Pascal and VB 6. In his career so far, he has worked with multiple languages, such as Python, PHP, Go, and full-stack JavaScript.
In his current role, he leads a team of engineers working with Node.js and React for frontend development. He considers himself a lifelong learner, and lately, he has been learning Haskell for fun and to gain some insight into pure functional programming.
Sam Anderson is an electronic engineer turned developer currently working for an award-winning creative digital agency based in Sheffield, England. Having moved from the world of hardware design, he is passionate about creating fast, beautiful, and efficient frontend applications. You can follow him on Twitter at @andomain.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Using the same framework to build both server and client-side applications saves you time and money. This book teaches you how you can use JavaScript and Node.js to build highly scalable APIs that work well with lightweight cross-platform client applications. It begins with the basics of Node.js in the context of backend development, and quickly leads you through the creation of an example client that pairs up with a fully authenticated API implementation.
This book balances theory and exercises, and contains multiple open-ended activities that use real-life business scenarios for you to practice and apply your newly acquired skills in a highly relevant context.
We have included over 20 practical activities and exercises across 9 topics to reinforce your learning. By the end of this book, you'll have the skills and exposure required to get hands-on with your own API development project.
This book is ideal for developers who already understand JavaScript and are looking for a quick, no-frills introduction to API development with Node.js. Though prior experience with other server-side technologies, such as Python, PHP, ASP.NET, or Ruby, will help, it's not essential to have a background in backend development before getting started.
Chapter 1, Introduction to Node.js, covers a few fundamental concepts of Node.js: writing basic Node.js code and running it from the Terminal, the module system and its categories, and the asynchronous programming model that is at the heart of how Node.js works and what actually makes Node.js tick.
Chapter 2, Building the API – Part 1, covers building a basic HTTP server, setting up Hapi.js, building a basic API with the Hapi.js framework, and fundamental concepts of web applications.
Chapter 3, Building the API – Part 2, covers an introduction to Knex.js and how we can use it to connect to and work with a database, essential CRUD database methods, API authentication using the JWT mechanism, the CORS mechanism, testing the API using the Lab library, and test automation using Gulp.js.
You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
The code bundle for the book is also hosted on GitHub at https://github.com/TrainingByPackt/BeginningAPIDevelopmentwithNode.js. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "After this setup is done, we then start the server using the server.start method."
A block of code is set as follows:
handler: (request, reply) =>
{
return reply({ message: 'hello, world' });
}
Any command-line input or output is written as follows:
node server.js
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Change the request type to POST."
Activity: These are scenario-based activities that will let you practically apply what you've learned over the course of a complete section. They are typically in the context of a real-world problem or situation.
Feedback from our readers is always welcome.
General feedback: Email feedback@packtpub.com and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at questions@packtpub.com.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at copyright@packtpub.com with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packtpub.com.
This chapter is designed to cover a few fundamental concepts in Node.js, as we lay a foundation for our subsequent chapters on API development.
Let's start this first chapter with a quick dive into how Node.js works and where it's being used lately. We will then have a look at its module system and its asynchronous programming model. Let's get started.
By the end of this chapter, you will be able to:
Node.js is an event-driven, server-side JavaScript environment. Node.js runs JS using the V8 engine developed by Google for use in their Chrome web browser. Leveraging V8 allows Node.js to provide a server-side runtime environment that compiles and executes JS at lightning speeds.
Node.js runs as a single-threaded process that acts upon callbacks and never blocks on the main thread, making it high-performing for web applications. A callback is basically a function that is passed to another function so that it can be called once that function is done. We will look into this in a later topic. This is known as the single-threaded event loop model. Other web technologies mainly follow the multithreaded request-response architecture.
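The idea can be shown with a toy snippet (not from the book): a function receives another function as an argument and invokes it once its own work is done.

```javascript
// A toy illustration of the callback pattern: greet() finishes its own
// work, then hands the result to the function it was given.
function greet(name, callback) {
  const message = `hello, ${name}`;
  return callback(message); // "call back" into the caller's code
}

greet('world', (msg) => console.log(msg)); // hello, world
```

Node.js applies the same pattern to I/O: the expensive work happens off the main thread, and your callback runs once it completes.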
The following diagram depicts the architecture of Node.js. As you can see, it's mostly C++ wrapped by a JavaScript layer. We will not go over the details of each component, since that is out of the scope of this chapter.

Node's goal is to offer an easy and safe way to build high-performance and scalable network applications in JavaScript.
Node.js has the following four major applications:
Before You Begin
Open the IDE and the Terminal to implement this solution.
Aim
Learn how to write a basic Node.js file and run it.
Scenario
You are writing a very basic mathematical library with handy mathematical functions.
Steps for Completion
mkdir -p beginning-nodejs/lesson-1/activity-a
console.log(add(10, 6)); // 16
console.log(sum(10, 5, 6)); // 21
node activity-a/math.js
The 16 and 21 values should be printed out on the Terminal.
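For reference, here is one possible math.js that satisfies this activity. Only the function names and the expected output come from the activity itself, so the implementations below are assumptions:

```javascript
// math.js — a possible solution sketch for Activity A.
// add() takes exactly two numbers; sum() accepts any count and reuses add().
function add(a, b) {
  return a + b;
}

function sum(...nums) {
  // reduce over the argument list, reusing add() for each step
  return nums.reduce((total, n) => add(total, n), 0);
}

console.log(add(10, 6)); // 16
console.log(sum(10, 5, 6)); // 21

module.exports = { add, sum };
```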
Let's have a look at Node's module system and the different categories of the Node.js modules.
Like most programming languages, Node.js uses modules as a way of organizing code. The module system allows you to organize your code, hide information, and only expose the public interface of a component using module.exports.
Node.js uses the CommonJS specification for its module system:
Let's look at a simple example:
// math.js file
function add(a, b)
{
return a + b;
}
…
…
module.exports =
{
add,
mul,
div,
};
// index.js file
const math = require('./math');
console.log(math.add(30, 20)); // 50
We can place Node.js modules into three categories:
As mentioned earlier, these are modules that can be used straight-away without any further installation. All you need to do is to require them. There are quite a lot of them, but we will highlight a few that you are likely to come across when building web applications:
For example, the following code reads the content of the lesson-1/temp/sample.txt file using the in-built fs module:
const fs = require('fs');
let file = `${__dirname}/temp/sample.txt`;
fs.readFile(file, 'utf8', (err, data) =>
{
if (err) throw err;
console.log(data);
});
The details of this code will be explained when we look at asynchronous programming later in this chapter.
Node Package Manager (npm) is the package manager for JavaScript and the world's largest software registry, enabling developers to discover packages of reusable code.
To install an npm package, you only need to run the command npm install <package-name> within your project directory. We are going to use this a lot in the next two chapters.
Let's look at a simple example. If we wanted to use a package (library) like request in our project, we could run the following command on our Terminal, within our project directory:
npm install request
To use it in our code, we require it, like any other module:
const request = require('request');
request('http://www.example.com', (error, response, body) =>
{
if (error) console.log('error:', error); // Print the error if one occurred
else console.log('body:', body); // Print the HTML for the site.
});
It's worth noting how Node.js goes about resolving a particular required module. For example, if a file /home/tony/projects/foo.js has a require call require('bar'), Node.js scans the filesystem for node_modules in the following order. The first bar.js that is found is returned:
Node.js looks for node_modules/bar in the current folder, followed by every parent folder, until it reaches the root of the filesystem tree for the current file.
Let's dive a little deeper into npm, by looking at some of the handy npm commands that you will often use:
We have already looked at how local modules are loaded from the previous example that had math.js and index.js.
Since JavaScript Object Notation (JSON) is such an important part of the web, Node.js has fully embraced it as a data format, even locally. You can load a JSON object from the local filesystem the same way you load a JavaScript module. During the module loading sequence, whenever a file.js is not found, Node.js looks for a file.json.
See the example files in lesson-1/b-module-system/1-basics/load-json.js:
const config = require('./config/sample');
console.log(config.foo); // bar
Here, you will notice that once required, the JSON file is transformed into a JavaScript object implicitly. Other languages will have you read the file and perhaps use a different mechanism to convert the content into a data structure such as a map, a dictionary, and so on.
Before You Begin
This activity will build upon the Running Basic Node.js activity of this chapter.
Aim
Create a new function, sumArray, which can sum up numbers from one or more arrays.
Scenario
If the argument is a single array, sum up the numbers; if there is more than one array, first combine the arrays into one before summing up. We will use the concat() function from lodash, which is a third-party package that we will install.
Steps for Completion
npm init
npm install lodash --save
Notice that we are adding the --save option on our command so that the package installed can be tracked in package.json. When you open the package.json file created in step 3, you will see an added dependencies key with the details.
const _ = require('lodash');
function sumArray()
{
let arr = arguments[0];
if (arguments.length > 1)
{
arr = _.concat(...arguments);
}
// reusing the sum function
// using the spread operator (...) since
// sum takes an argument of numbers
return sum(...arr);
}
// testing
console.log(math.sumArray([10, 5, 6])); // 21
console.log(math.sumArray([10, 5], [5, 6], [1, 3])) // 30
node index.js
You should see 21 and 30 printed out.
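If you want to try the same logic without the lodash dependency, the built-in Array.prototype.concat achieves the same flattening. This is a sketch, with sum() re-implemented from the earlier activity:

```javascript
// sumArray without lodash: [].concat(...) flattens one level of arrays.
// sum() is assumed from the earlier math-library activity.
function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}

function sumArray(...arrays) {
  // combine all argument arrays into one, then reuse sum()
  const arr = [].concat(...arrays);
  return sum(...arr);
}

console.log(sumArray([10, 5, 6])); // 21
console.log(sumArray([10, 5], [5, 6], [1, 3])); // 30
```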
Let's have a look at the asynchronous programming model that is at the heart of how Node.js works.
Callbacks are functions that are executed asynchronously, or at a later time. Instead of the code reading top to bottom procedurally, asynchronous programs may execute different functions at different times based on the order and speed of earlier functions.
Since JavaScript treats functions like any other object, we can pass a function as an argument to another function and later execute that passed-in function, or even return it to be executed later.
We saw such a function previously when we were looking at the fs module in The Module System section. Let's revisit it:
const fs = require('fs');
let file = `${__dirname}/temp/sample.txt`;
fs.readFile(file, 'utf8', (err, data) =>
{
if (err) throw err;
console.log(data);
});
On line 3, we use __dirname, one of the global variables, which gives us the absolute path of the directory (folder) containing our current file (read-file.js); from there, we can access the temp/sample.txt file.
Our main point of discussion is the chunk of code between lines 5 and 8. Just like most of the methods you will come across in Node.js, they mostly take in a callback function as the last argument.
Most callback functions will take in two parameters, the first being the error object and the second, the results. For the preceding case, if file reading is successful, the error object, err, will be null and the contents of the file will be returned in the data object.
Let's break down this code for it to make more sense:
const fs = require('fs');
let file = `${__dirname}/temp/sample.txt`;
const callback = (err, data) =>
{
if (err) throw err;
console.log(data);
};
fs.readFile(file, 'utf8', callback);
Now, let's look at the asynchronous part. Let's add an extra line to the preceding code:
const fs = require('fs');
let file = `${__dirname}/temp/sample.txt`;
const callback = (err, data) =>
{
if (err) throw err;
console.log(data);
};
fs.readFile(file, 'utf8', callback);
console.log('Print out last!');
See what we get as a print out:
Print out last!
hello,
world
How come Print out last! comes first? This is the whole essence of asynchronous programming. Node.js still runs on a single thread, but the readFile call on line 10 executes in a non-blocking manner, so execution moves straight on to the next line, which is console.log('Print out last!'). Since reading the file takes comparatively long, the shorter statement prints first. Once the readFile operation is done, it prints out the contents of the file through the callback.
Promises are an alternative to callbacks for delivering the results of an asynchronous computation. First, let's look at the basic structure of promises, before we briefly look at the advantages of using promises over normal callbacks.
Let's rewrite the code above with promises:
const fs = require('fs');
const readFile = (file) =>
{
return new Promise((resolve, reject) =>
{
fs.readFile(file, 'utf8', (err, data) =>
{
if (err) reject(err);
else resolve(data);
});
});
}
// call the async function
readFile(`${__dirname}/../temp/sample.txt`)
.then(data => console.log(data))
.catch(error => console.log('err: ', error.message));
This code can further be simplified by using the util.promisify function, which takes a function following the common Node.js callback style, that is, taking an (err, value) => … callback as the last argument and returning a version that returns promises:
const fs = require('fs');
const util = require('util');
const readFile = util.promisify(fs.readFile);
readFile(`${__dirname}/../temp/sample.txt`, 'utf8')
.then(data => console.log(data))
.catch(error => console.log('err: ', error));
From what we have seen so far, promises provide a standard way of handling asynchronous code, making it a little more readable.
What if you had 10 files, and you wanted to read all of them? Promise.all comes to the rescue. Promise.all is a handy function that enables you to run asynchronous functions in parallel. Its input is an array of promises; its output is a single promise that is fulfilled with an array of the results:
const fs = require('fs');
const util = require('util');
const readFile = util.promisify(fs.readFile);
const files = [
'temp/sample.txt',
'temp/sample1.txt',
'temp/sample2.txt',
];
// map the files to the readFile function, creating an
// array of promises
const promises = files.map(file => readFile(`${__dirname}/../${file}`, 'utf8'));
Promise.all(promises)
.then(data =>
{
data.forEach(text => console.log(text));
})
.catch(error => console.log('err: ', error));
Async/await is one of the latest additions to Node.js, added early in 2017 with version 7.6. It provides an even better way of writing asynchronous code, making it look and behave a little more like synchronous code.
Going back to our file reading example, say you wanted to get the contents of two files and concatenate them in order. This is how you can achieve that with async/await:
const fs = require('fs');
const util = require('util');
const readFile = util.promisify(fs.readFile);
async function readFiles()
{
const content1 = await readFile(`${__dirname}/../temp/sample1.txt`);
const content2 = await readFile(`${__dirname}/../temp/sample2.txt`);
return content1 + '\n - and - \n\n' + content2;
}
readFiles().then(result => console.log(result));
In summary, any asynchronous function that returns a promise can be awaited.
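One more point worth a sketch (the functions below are made up for illustration): because a rejected promise surfaces as a thrown exception inside an async function, ordinary try/catch works for error handling.

```javascript
// mightFail() is a hypothetical async-style function used only to
// demonstrate try/catch with await.
function mightFail(shouldFail) {
  return shouldFail
    ? Promise.reject(new Error('boom'))
    : Promise.resolve('ok');
}

async function run(shouldFail) {
  try {
    // a rejection here is thrown, so the catch block below handles it
    return await mightFail(shouldFail);
  } catch (err) {
    return `caught: ${err.message}`;
  }
}

run(false).then(result => console.log(result)); // ok
run(true).then(result => console.log(result)); // caught: boom
```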
Before You Begin
You should have already gone through the previous activities.
Aim
Read the file (using fs.readFile), in-file.txt, properly case format the names (using the lodash function, startCase), then sort the names in alphabetical order and write them out to a separate file out-file.txt (using fs.writeFile).
Scenario
We have a file, in-file.txt, containing a list of peoples' names. Some of the names have not been properly case formatted, for example, john doe should be changed to John Doe.
Steps for Completion
In Lesson-1, create another folder called activity-c.
npm init
The solution files are placed at Code/Lesson-1/activitysolutions/activity-c.
In this chapter, we went through a quick overview of Node.js, seeing how it looks under the hood.
We wrote basic Node.js code and ran it from the Terminal using the Node.js command.
We also looked at the module system of Node.js, where we learnt about the three categories of Node.js modules, that is, in-built, third-party (installed from the npm registry), and local modules, along with examples of each. We also looked at how Node.js resolves a module name whenever you require it, by searching in the various directories.
We then finished off by looking at the asynchronous programming model that is at the heart of how Node.js works, and what actually makes Node.js tick. We looked at the three main ways you can write asynchronous code: using callbacks, Promises, and the new async/await paradigm.
The foundation is now laid for us to go ahead and implement our API using Node.js. Most of these concepts will crop up again as we build our API.
This chapter is meant to introduce the students to API building using Node.js. We will start by building a basic HTTP server to gain an understanding of how Node.js works.
By the end of this chapter, you will be able to:
Let's begin by looking at the basic building blocks of a Node.js web application. The built-in http module is the core of this. However, from the following example, you will also appreciate how basic this can be.
Save the following code in a file called simple-server.js:
const http = require('http');
const server = http.createServer((request, response) =>
{
console.log('request starting...');
// respond
response.write('hello world!');
response.end();
});
server.listen(5000);
console.log('Server running at http://127.0.0.1:5000');
Now, let's run the file:
node simple-server.js
When we go to the browser and visit the URL in the example, this is what we get:

Hapi.js (HTTP API) is a rich framework for building applications and services, focusing on writing reusable application logic. There are a number of other frameworks; notable among them is Express.js. However, from the ground up, Hapi.js is optimized for API building, and we will see this shortly when building our application.
In this exercise, we're going to build a basic HTTP server like the one before, but now with Hapi.js. You will notice how most of the things are done for us under the hood with Hapi.js. However, Hapi.js is also built on top of the http module.
For the rest of the exercises, from the first exercise of Chapter 3, Building the API – Part 2, we will be building on top of each exercise as we progress. So, we might need to go back and modify previous files and so forth:
npm init -y
npm install hapi --save
const Hapi = require('hapi');
// create a server with a host and port
const server = new Hapi.Server();
server.connection({
  host: 'localhost',
  port: 8000,
});
// Start the server
server.start((err) =>
{
if (err) throw err;
console.log(`Server running at: ${server.info.uri}`);
});
Let us try to understand the code:
node server.js
Server running at: http://localhost:8000
You should see something similar to this at http://localhost:8000:

Here, we're saying, if the port is provided as the first argument of the script, use that, otherwise, use 8000 as the port number. Now, when you run: node server.js 8002, the server should run okay from localhost:8002.
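Isolated, the pattern being described might look like this (a sketch; process.argv[2] is the first argument after node server.js):

```javascript
// Take the port from the first command-line argument if one is given,
// otherwise fall back to 8000. Number('abc') is NaN, which is falsy,
// so bad input also falls back to the default.
const port = Number(process.argv[2]) || 8000;
console.log(`using port ${port}`);
```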
For us to utilize the client to the fullest, to be able to do all the request types (GET, POST, UPDATE, and so on), we will need to have an API client. There are a number out there, but we recommend either Postman (https://www.getpostman.com/) or Insomnia (https://insomnia.rest/). For our examples, we will be using Insomnia.
After installing Insomnia, add a GET request to http://localhost:8000:

Enter a name for the new request:


As we are now building our API, we need a formal way of representing our data in our request, by sending or receiving it. JavaScript Object Notation (JSON) is the conventional data-interchange format for REST APIs.
One thing to note about JSON is that it started from JavaScript and is now widely adopted across other languages. So, when it comes to Node.js, you will see how using JSON becomes so easy and natural.
handler: (request, reply) =>
{
return reply({ message: 'hello, world' });
}
node server.js
{
"message": "hello, world"
}
This comes out-of-the-box in Hapi.js, while with some frameworks, such as Express.js, you have to use a json function to do the conversion.
You will have noticed that, after making the changes in the first exercise, we had to go back, stop the server, and start it again. Doing this every time you make a change to your code becomes very cumbersome. Luckily, tooling comes to our rescue.
There is a Node.js package called nodemon, which can help restart the server automatically whenever there is a change in our files.
In this exercise, we're going to introduce a Node module known as nodemon, which we will be using to run our web server. This makes it possible for the server to automatically reload when we make changes to it, therefore avoiding the tediousness of stopping the server and starting it over again manually whenever we make changes to our server:
npm install --global nodemon
nodemon server.js
You should get something like this:
[nodemon] 1.12.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Server running at: http://localhost:8000
Logging is a very important component of any web application. We need a way of preserving the history of the server so that we can come back any time and see how it was serving requests.
Most of all, you don't want logging to be an afterthought, implemented only after you come across a production bug that crashes your web app while you are trying to figure out where exactly the problem is.
Hapi.js has minimal logging functionality built in, but if you need something more extensive, a good option is the good plugin (https://github.com/hapijs/good).
In this exercise, we're going to add a logging mechanism on the web server we have created, so that each request and server activity can be easily tracked through the logs:
npm install --save good good-console
const Hapi = require('hapi');
const good = require('good');
// set up logging
const options = {
ops: {
interval: 100000,
},
reporters: {
consoleReporters: [
{ module: 'good-console' },
'stdout',
…
});
171102/012027.934, [ops] memory: 34Mb, uptime (seconds): 100.387, load: [1.94580078125,1.740234375,1.72021484375]
171102/012207.935, [ops] memory: 35Mb, uptime (seconds): 200.389, load: [2.515625,2.029296875,1.83544921875]
...
171102/012934.889, [response] http://localhost:8000: get /{} 200 (13ms)
Let's have a look at the concept of request and the different HTTP request methods.
Having set up our server, we are ready to start building our API. The routes are basically what constitute the actual API.
We will first look at HTTP request methods (sometimes referred to as HTTP verbs), then apply them to our API using a simple todo list example. We will look at five major ones:
In the following exercises, we're going to build routes over hardcoded, in-memory data; in the next chapter, we will rewrite this code so that we can work with real and dynamic data coming directly from the database.
const todoList = [
  {
    title: 'Shopping',
    dateCreated: 'Jan 21, 2018',
    list: [
      { text: 'Node.js Books', done: false },
      ...
    ],
  },
  ...
];
const routes = {};
routes.todo = require('./routes/todo')
// create a server with a host and port
const server = new Hapi.Server();
server.connection(
{
host: 'localhost',
port: process.argv[2] || 8000,
});
server.route(routes.todo);

module.exports = [
{
method: 'GET',
path: '/todo',
...
handler: (request, reply) => {
const id = request.params.id - 1;
// since array is 0-based index
return reply(todoList[id]);
}
},
];

module.exports = [
// previous code
{
method: 'POST',
path: '/todo',
handler: (request, reply) => {
const todo = request.payload;
todoList.push(todo);
return reply({ message: 'created' });
…
];



{
"message": "created"
}
[
...
{
"title": "Languages to Learn",
"dateCreated": "Mar 2, 2018",
"list":
[
"C++",
"JavaScript"
]
}
]
{
method: 'PUT',
path: '/todo/{id}',
handler: (request, reply) => {
const index = request.params.id - 1;
// replace the whole resource with the new one
todoList[index] = request.payload;
return reply({ message: 'updated' });
}
}

{
"message": "updated"
}

{
method: 'PATCH',
handler: (request, reply) =>
{
…
Object.keys(request.payload).forEach(key =>
{
if (key in todo)
{
todo[key] = request.payload[key];
…
return reply({ message: 'patched' });
},
}

{
"message": "patched"
}
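The merge step in the PATCH handler is worth isolating so it can be run on its own: only keys that already exist on the todo are copied over from the payload.

```javascript
// The PATCH merge logic from the handler above, extracted as a plain
// function: unknown payload keys are ignored, existing ones are updated.
function patchTodo(todo, payload) {
  Object.keys(payload).forEach((key) => {
    if (key in todo) {
      todo[key] = payload[key];
    }
  });
  return todo;
}

const todo = { title: 'Shopping', dateCreated: 'Jan 21, 2018' };
console.log(patchTodo(todo, { title: 'Grocery run', unknownKey: 1 }));
// { title: 'Grocery run', dateCreated: 'Jan 21, 2018' }
```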

{
method: 'DELETE',
path: '/todo/{id}',
handler: (request, reply) => {
const index = request.params.id - 1;
delete todoList[index]; // replaces with `undefined`
return reply({ message: 'deleted' });
},
},
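As the comment in the handler notes, the delete operator leaves a hole rather than shrinking the array. The difference is easy to demonstrate; Array.prototype.splice is the usual alternative when you want the element actually removed:

```javascript
// delete leaves a hole (undefined) and keeps the length unchanged...
const withHole = ['milk', 'bread', 'eggs'];
delete withHole[1];
console.log(withHole.length); // 3 — hole at index 1

// ...while splice() removes the element and reindexes the array.
const removed = ['milk', 'bread', 'eggs'];
removed.splice(1, 1); // remove one element at index 1
console.log(removed.length); // 2
```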

{
method: 'GET',
path: '/todo/{id}',
handler: (request, reply) =>
{
const id = request.params.id - 1;
// should return 404 error if item is not found
if (todoList[id]) return reply(todoList[id]);
return reply({ message: 'Not found' }).code(404);
}
}
{
"message": "Not found"
}
We will need to validate the incoming requests to make sure that they conform to what the server can handle.
This is one of the places I see Hapi.js shining above other frameworks. In Hapi.js, you hook in validation as a configuration object as part of the route object. For validation, we will use the Joi library, which works well with Hapi.js.
In this exercise, we are going to see the concept of request validation in action. We will write a validation for one of the routes as an example, but the same could be applied across the other routes:
npm install joi --save
{
method: 'POST',
path: '/todo',
handler: (request, reply) =>
{
const todo = request.payload;
todoList.push(todo);
return reply({ message: 'created' });
},
...
},


More details on Joi can be found here: https://github.com/hapijs/joi.
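To make the idea concrete without pulling in the library, here is roughly what the same check looks like by hand (the field names are assumptions based on the earlier todo payloads); Joi replaces this boilerplate with a declarative schema:

```javascript
// Hand-rolled payload validation — a sketch of what Joi does for us.
// Field names (title, list) are assumed from the todo examples.
function validateTodoPayload(payload) {
  const errors = [];
  if (typeof payload.title !== 'string' || payload.title.length === 0) {
    errors.push('"title" is required and must be a non-empty string');
  }
  if (payload.list !== undefined && !Array.isArray(payload.list)) {
    errors.push('"list" must be an array');
  }
  return errors; // an empty array means the payload is valid
}

console.log(validateTodoPayload({ title: 'Shopping', list: [] })); // []
console.log(validateTodoPayload({ list: 'oops' }).length); // 2
```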
This chapter has covered the initial part of building our API with Node.js. We started by looking at a basic HTTP server built with only the built-in http module, so that we could appreciate the basic building blocks of a Node.js web application. We then introduced doing the same thing with the Hapi.js framework.
We then went through various HTTP verbs (request methods) by example as we built our basic API with Hapi.js. Those were GET, POST, PUT, PATCH, and DELETE.
We also covered some fundamental concepts of web applications, such as logging, using good and request validation, and using Joi.
This chapter is intended to revisit the previous implementation, this time saving our data in persistent storage (a database). It will also cover authentication, as well as unit testing and hosting as additional good-to-know (but not essential) concepts. It is therefore prudent to put more emphasis on working with the database using Knex.js and authenticating your API with JWT.
By the end of this chapter, you will be able to:
In this section, we're going to go through the fundamental concepts of working with a database. We will continue the step-by-step build-up from our previous todo project. You will have noticed that in our last project, we were storing our information in memory, which means it disappears as soon as our server restarts. In real life, you will want to store this data persistently for later access.
So, what is Knex.js? It is a SQL query builder for relational databases such as PostgreSQL, Microsoft SQL Server, MySQL, MariaDB, SQLite3, and Oracle. With something like Knex, you write your queries once and they work with any of the supported databases; switching between them is just a matter of changing the configuration.
Let's walk through the exercise as we explain the concepts.
Let's go back to where we left off in Exercise 11: Validating a Request, in Chapter 2, Building the API – Part 1. In this example, we will be using MySQL as our database of choice. Make sure your machine is set up with MySQL and MySQL Workbench:



CREATE DATABASE todo;



Now that we have created our database, in this exercise we are going to connect our application to it using the necessary npm packages, that is, knex and mysql:
npm install mysql knex --save
const env = process.env.NODE_ENV || 'development';
const configs =
{
development:
{
client: 'mysql',
...
const Knex = require('knex')(configs[env]);
module.exports = Knex;
const Knex = require('./db');
Knex.raw('select 1+1 as sum')
.then(([res]) => console.log('connected: ', res[0].sum))
.catch((err) => console.log(err.message));
node test-db.js
You should get the following printed:
connected: 2
In this exercise, we're going to write code for saving a todo and its items. To start off, let's create a dummy user, since we will hardcode the user ID in our code. Later, in Exercise 19: Securing All the Routes, we will have the ID picked up from the authentication details:
USE todo;
INSERT INTO `user` (`id`, `name`, `email`, `password`)
VALUES (NULL, 'Test User', 'user@example.com',
MD5('u53rtest'));

const Knex = require('../db');
{
method: 'POST',
path: '/todo',
handler: async (request, reply) =>
{
const todo = request.payload;
todo.user_id = 1; // hard-coded for now
// using array-destructuring here since the
// returned result is an array with 1 element
const [ todoId ] = await Knex('todo')
.returning('id')
.insert(todo);
...
}
},
nodemon server.js

{
method: 'POST',
path: '/todo/{id}/item',
handler: async (request, reply) =>
{
const todoItem = request.payload;
todoItem.todo_id = request.params.id;
const [ id ] = await Knex('todo_item')
.insert(todoItem);
return reply({ message: 'created', id: id });
...
},

In this exercise, we're going to write the routes for:
We will use a number of Knex methods:
{
method: 'GET',
path: '/todo',
handler: async (request, reply) =>
{
const userId = 1; // hard-coded
const todos = await Knex('todo')
.where('user_id', userId);
return reply(todos);
},
},
{
method: 'GET',
path: '/todo/{id}',
...
.where({
id: id,
user_id: userId
});
if (todo) return reply(todo);
return reply({ message: 'Not found' }).code(404);
},
},
{
method: 'GET',
path: '/todo/{id}/item',
handler: async (request, reply) =>
{
const todoId = request.params.id;
const items = await Knex('todo_item')
.where('todo_id', todoId);
return reply(items);
},
},

In this exercise, we're going to write routes for updating a todo title or a todo item, and here we will introduce a new Knex method, .update():
{
method: 'PATCH',
path: '/todo/{id}',
...
title: Joi.string().required(),
}
}
}
},

{
method: 'PATCH',
path: '/todo/{todo_id}/item/{id}',
handler: async (request, reply) =>
{
const itemId = request.params.id;
...
payload:
{
text: Joi.string(),
done: Joi.boolean(),
}
...
},
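The elided handler body is where .update() comes in. As a hedged sketch (table and column names assumed from the earlier exercises, not copied from the book's full listing):

```javascript
handler: async (request, reply) =>
{
  const itemId = request.params.id;
  // update only the fields present in the validated payload;
  // .update() resolves with the number of affected rows
  const updated = await Knex('todo_item')
    .where('id', itemId)
    .update(request.payload);
  if (updated) return reply({ message: 'updated' });
  return reply({ message: 'Not found' }).code(404);
},
```

The same pattern applies to the PATCH /todo/{id} route, with Knex('todo') and the title field.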



In this exercise, we will be introducing the last vital Knex method to complete our Create, Read, Update, Delete (CRUD) journey, .delete():
{
method: 'DELETE',
path: '/todo/{todoId}/item/{id}',
handler: async (request, reply) =>
{
const id = request.params.id;
const deleted = await Knex('todo_item')
.where('id', id)
.delete();
return reply({ message: 'deleted' });
},
},


Now that we have updated almost all of the routes that we had from Chapter 2, Building the API – Part 1, let's remove the code that is no longer needed:
const todoList = [
...
];
{
method: 'PUT',
path: '/todo/{id}',
handler: (request, reply) =>
{
const index = request.params.id - 1;
// replace the whole resource with the new one
todoList[index] = request.payload;
return reply({ message: 'updated' });
},
},
{
method: 'DELETE',
path: '/todo/{id}',
handler: async (request, reply) =>
{
const id = request.params.id;
const deleted = await Knex('todo')
.where('id', id)
.delete();
return reply({ message: 'deleted' });
},
},
So far, we have been using our API without any authentication. This means that if this API is hosted at a public place, anyone can access any of the routes, including deleting all our records! Any proper API needs authentication (and authorization). Basically, we need to know who is doing what, and if they are authorized (allowed) to do that.
JSON Web Token (JWT) is an open, industry-standard method for representing claims securely between two parties. Claims are any bits of data that you want someone else to be able to read and/or verify, but not alter.
To identify/authenticate users for our API, the user puts a standard-based token in the header (with the Authorization key) of the request (prefixing it with the word Bearer). We will see this practically in a short while.
In this exercise, we're going to secure all the /todo/* routes that we created so that no unauthenticated user can access them. In Exercise 21: Implementing Authorization, we will differentiate between an unauthenticated and an unauthorized user:
npm install hapi-auth-jwt --save
const hapiAuthJwt = require('hapi-auth-jwt');
server.register(hapiAuthJwt, (err) =>
{
server.auth.strategy('token', 'jwt',
{
key: 'secretkey-hash',
verifyOptions:
{
algorithms: [ 'HS256' ],
...
// add auth config on all routes
...
});

Now that we have secured all our todo routes, we need a way to issue tokens to valid users to access the API. We will have the users send their email and password to a route (/auth), and our API will issue back an authentication token which will be used for each request:
npm install jsonwebtoken md5 --save
const jwt = require('jsonwebtoken');
const Joi = require('joi');
const md5 = require('md5');
const Knex = require('../db');
module.exports =
{
method: 'POST',
path: '/auth',
...
};
routes.auth = require('./routes/auth');
server.route(routes.auth);
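Putting these pieces together, the elided /auth handler might look roughly like the following. This is a hedged sketch: the column names come from the earlier INSERT statements, and the secret must match the one registered with server.auth.strategy:

```javascript
handler: async (request, reply) =>
{
  const { email, password } = request.payload;
  // look up the user by email and MD5 password hash
  const [ user ] = await Knex('user').where({
    email: email,
    password: md5(password),
  });
  if (!user)
  {
    return reply({ message: 'invalid credentials' }).code(401);
  }
  // issue a token carrying the user's id and name as claims
  const token = jwt.sign(
    { id: user.id, name: user.name },
    'secretkey-hash', // must match the key in server.auth.strategy
    { algorithm: 'HS256', expiresIn: '1h' }
  );
  return reply({ token: token });
},
```

A client would then send this token on every request in an `Authorization: Bearer <token>` header.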




const userId = request.auth.credentials.id;
INSERT INTO 'user' ('id', 'name', 'email', 'password')
VALUES (NULL, 'Another User', 'another@example.com',
MD5('12345'));





Oops! We can see someone else's todo list items; this is a security flaw. This leads us to the final part of this topic, authorization.
Through authentication, we get to know who is accessing our API; through authorization, we get to tell who can access what, within our API.
In this exercise, we are going to refine our API to make sure that users are only authorized to access their todos and todo items:
{
method: 'GET',
path: '/todo/{id}/item',
handler: async (request, reply) =>
{
const todoId = request.params.id;
...
return reply(items);
},
},
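The elided part of this handler is the ownership check itself. A hedged sketch of one way to express it, joining through the todo table so that only items belonging to the authenticated user's todos come back (table and column names assumed from the earlier exercises):

```javascript
const userId = request.auth.credentials.id;
// only return items whose parent todo belongs to this user
const items = await Knex('todo_item')
  .join('todo', 'todo_item.todo_id', 'todo.id')
  .where({
    'todo_item.todo_id': todoId,
    'todo.user_id': userId,
  })
  .select('todo_item.*');
return reply(items);
```

The same idea applies to every other /todo/* route: scope each query by the user_id taken from request.auth.credentials, never from client input.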

Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to let a user agent (browser) gain permission to access selected resources from a server on a different origin (domain) than the site currently in use. For instance, if you host your web application frontend on a different domain from the API, browser restrictions will prevent the frontend from accessing the API.
We therefore need to explicitly state that our API will allow cross-origin requests. We will modify the server.js file, at the place we were initializing the server connection, to enable CORS:
server.connection(
{
host: 'localhost',
port: process.argv[2] || 8000,
routes:
{
cors: true,
}
});
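Note that `cors: true` allows requests from any origin. Hapi.js also accepts an object here so you can restrict which origins are allowed; as a sketch (the origin list below is illustrative):

```javascript
server.connection(
{
  host: 'localhost',
  port: process.argv[2] || 8000,
  routes:
  {
    cors:
    {
      // only allow our known frontend origins
      origin: ['http://localhost:3000', 'https://app.example.com'],
    },
  },
});
```

For a public API, `cors: true` is fine; for an API backing a single frontend, a whitelist like this is the safer default.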
In this section, we will have a brief look at writing unit tests for Hapi.js APIs. Testing is a huge topic that perhaps requires a whole course on its own, but in this section, we will be introducing the essential parts to get you up and running.
Let's first underscore the importance of writing unit tests for your API:
Hapi.js conventionally uses Lab (https://github.com/hapijs/lab) as its testing framework. We're going to write a few tests for our API in the next exercise.
In this exercise, we will introduce the concept of writing unit tests for the Hapi.js web API, mainly using the third-party lab module and the built-in assert module. Ideally, we should have a separate database for our tests, but for the sake of simplicity here, we will share our development database for tests too:
npm install lab --save-dev
const assert = require('assert');
// lab set-up
const Lab = require('lab');
const lab = exports.lab = Lab.script();
// get our server(API)
const server = require('../server');
module.exports = server;
configs.test = configs.development;
server.connection(
{
host: 'localhost',
port: process.env.PORT || 8000,
routes:
{
cors: true,
}
});
const
{
experiment,
test,
before,
} = lab;
experiment('Base API', () =>
{
test('GET: /', () =>
{
const options =
{
...
assert.equal(response.result.message, 'hello, world');
});
});
});
PORT=8001 ./node_modules/lab/bin/lab test --leaks

"test": "echo \"Error: no test specified\" && exit 1"
"test": "PORT=8001 ./node_modules/lab/bin/lab test --leaks"
npm test
experiment('Authentication', () =>
{
test('GET: /todo without auth', () =>
{
const options =
{
method: 'GET',
url: '/todo'
};
server.inject(options, (response) =>
{
assert.equal(response.statusCode, 401);
});
});
});

npm install gulp gulp-shell gulp-watch --save-dev
const gulp = require('gulp');
const shell = require('gulp-shell');
const watch = require('gulp-watch');
...
gulp.task('test', shell.task('npm test'));
"scripts":
{
"test": "PORT=8001 ./node_modules/lab/bin/lab test --leaks",
"test:dev": "./node_modules/.bin/gulp test:dev"
},
npm run test:dev

experiment('/todo/* routes', () =>
{
const headers =
{
Authorization: 'Bearer ',
};
before(() =>
{
const options =
{
method: 'POST',
url: '/auth',
...
});
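The elided portion of the before hook injects the /auth request and stores the issued token for the tests that follow. A hedged sketch, using the credentials from the dummy user we inserted earlier:

```javascript
before(() =>
{
  const options =
  {
    method: 'POST',
    url: '/auth',
    payload:
    {
      email: 'user@example.com',
      password: 'u53rtest',
    },
  };
  // return a promise so Lab waits for the token before running tests
  return new Promise((resolve) =>
  {
    server.inject(options, (response) =>
    {
      // append the issued token to the shared headers object
      headers.Authorization = 'Bearer ' + response.result.token;
      resolve();
    });
  });
});
```

Each test in the experiment can then pass `headers` in its inject options to exercise the secured routes as an authenticated user.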
In this chapter, we have explored quite a lot. We started off by introducing Knex.js and how we can use it to connect to and query the database, going through the essential CRUD database methods. We then covered how we can authenticate our API and protect it from unauthorized access using the JWT mechanism. We also mentioned something important about CORS: how browsers handle it and how we can enable it on our API. We finished off by covering concepts around testing our API using the Lab library and, in passing, test automation using gulp.js.
In this book, we started off by learning how to implement the necessary modules to get simple applications up and running. We then moved on to using async and await to handle asynchronous code efficiently. After a primer on Node.js (the application-building aspect), we graduated to building an API with Node.js. To do this, we initially used the built-in HTTP module and then utilized the rich Hapi.js framework, seeing along the way the advantages that the framework brings. Later on, we learned how to handle requests from API clients and, finally, we completed the book by covering interactions with databases.
This is a practical quick-start guide. To further your knowledge, you should consider building real-time applications with Node.js. We have recommended a few books in the next section, but ensure you check our website to find other books that may interest you!
If you enjoyed this book, you may be interested in these other books by Packt:
RESTful Web API Design with Node.js 10 - Third Edition
Valentin Bojinov
ISBN: 978-1-78862-332-2
Advanced Node.js Development
Andrew Mead
ISBN: 978-1-78839-393-5
Please share your thoughts on this book with others by leaving a review on the site that you bought it from. If you purchased the book from Amazon, please leave us an honest review on this book's Amazon page. This is vital so that other potential readers can see and use your unbiased opinion to make purchasing decisions, we can understand what our customers think about our products, and our authors can see your feedback on the title that they have worked with Packt to create. It will only take a few minutes of your time, but is valuable to other potential customers, our authors, and Packt. Thank you!