
BIRMINGHAM - MUMBAI
Copyright © 2017 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: October 2017
Production reference: 1241017
ISBN 978-1-78712-648-0
Author: Cyrus Dasadia
Copy Editor: Safis Editing
Reviewers: Nilap Shah, Ruben Oliva Ramos
Project Coordinator: Nidhi Joshi
Commissioning Editor: Amey Varangaonkar
Proofreader: Safis Editing
Acquisition Editor: Viraj Madhav
Indexer: Aishwarya Gangawane
Content Development Editor: Cheryl Dsa
Graphics: Tania Dutta
Technical Editor: Dinesh Pawar
Production Coordinator: Shantanu Zagade
Cyrus Dasadia has enjoyed tinkering with open source projects since 1996. He has been working as a Linux system administrator and part-time programmer for over a decade. He works at InMobi, where he loves designing tools and platforms. His love for MongoDB blossomed in 2013, when he was amazed by its ease of use and stability. Since then, almost all of his projects have been written with MongoDB as the primary backend. Cyrus is also the creator of an open source alert management system called CitoEngine. His spare time is devoted to trying to reverse-engineer software, playing computer games, or increasing his silliness quotient by watching reruns of Monty Python.
Nilap Shah is a lead software consultant with experience across various fields and technologies. He is an expert in .NET, UiPath (robotics), and MongoDB, and is a certified MongoDB developer and DBA. He is a technical writer as well as a technical speaker, and also provides MongoDB corporate training. Currently, he is working as a lead MongoDB consultant, providing solutions with MongoDB technology (both DBA and developer projects). His LinkedIn profile can be found at https://www.linkedin.com/in/nilap-shah-8b6780a/ and he can be reached at +91-9537047334 on WhatsApp.
Ruben Oliva Ramos is a computer systems engineer from Tecnologico de Leon Institute, with a master's degree in computer and electronic systems engineering, teleinformatics, and networking specialization from the University of Salle Bajio in Leon, Guanajuato, Mexico. He has more than 5 years of experience in developing web applications to control and monitor devices connected to Arduino and Raspberry Pi, using web frameworks and cloud services to build Internet of Things applications.
He is a mechatronics teacher at the University of Salle Bajio, where he teaches students of the master's degree in design and engineering of mechatronics systems. Ruben also works at Centro de Bachillerato Tecnologico Industrial 225 in Leon, Guanajuato, Mexico, teaching subjects such as electronics, robotics and control, automation, and microcontrollers on the mechatronics technician career track. He is a consultant and developer for projects in areas such as monitoring systems and dataloggers, using technologies such as Android, iOS, Windows Phone, HTML5, PHP, CSS, Ajax, JavaScript, Angular, and ASP.NET; databases such as SQLite, MongoDB, and MySQL; web servers such as Node.js and IIS; hardware programming with Arduino, Raspberry Pi, Ethernet Shield, GPS, GSM/GPRS, and ESP8266; and control and monitoring systems for data acquisition and programming.
He has authored the books Internet of Things Programming with JavaScript and Advanced Analytics with R and Tableau, both published by Packt Publishing. He is also involved in monitoring, controlling, and acquiring data with Arduino and Visual Basic .NET for Alfaomega.
For support files and downloads related to your book, please visit www.PacktPub.com. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at service@packtpub.com for more details. At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.
Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/178712648X.
If you'd like to join our team of regular reviewers, you can email us at customerreviews@packtpub.com. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!
MongoDB is an extremely versatile NoSQL database that offers performance, scalability, and reliability. It has slowly become one of the leading NoSQL database systems for storing extremely large datasets. In addition, the fact that it is open source makes it a perfect candidate for any project. From prototyping a minimum viable product to storing millions of complex documents, MongoDB is clearly emerging as the go-to database system.
This book aims to help the reader in operating and managing MongoDB systems. The contents of this book are divided into sections covering all the core aspects of administering MongoDB systems. The primary goal of this book is not to duplicate the MongoDB documentation, but to gently nudge the reader towards topics that are often overlooked when designing MongoDB systems.
Chapter 1, Installation and Configuration, covers the basic details of how to install MongoDB, either from the bundled binaries or through the operating system's package managers. It also covers configuration details, as well as how to install MongoDB in a Docker container.
Chapter 2, Understanding and Managing Indexes, gives a quick overview of the benefits of indexes, their various types, and how to optimize database responses by choosing the correct indexes.
Chapter 3, Performance Tuning, covers various topics that can help optimize the infrastructure to deliver optimal database performance. We discuss disk I/O optimization, measuring slow queries, storage considerations in AWS, and managing working sets.
Chapter 4, High Availability with Replication, shows how to achieve high availability using MongoDB replica sets. Topics such as the configuration of replica sets, managing node subscriptions, arbiters, and so on are covered.
Chapter 5, High Scalability with Sharding, covers MongoDB's high scalability aspects using shards. The topics covered in this section include setting up a sharded cluster, managing chunks, managing non-sharded data, adding and removing nodes from the cluster, and creating a geographically distributed sharded cluster.
Chapter 6, Managing MongoDB Backups, helps the reader understand how to select an optimum backup strategy for their MongoDB setup. It covers how to take backups of standalone systems, replica sets, analyzing backup files, and so on.
Chapter 7, Restoring MongoDB from Backups, shows various techniques for restoring systems from previously generated backups. Topics covered include restoring standalone systems, specific databases, the backup of one database to another database, replica sets, and sharded clusters.
Chapter 8, Monitoring MongoDB, illustrates various aspects of monitoring the health of a MongoDB setup. This chapter includes recipes for using mongostat, monitoring replica set nodes, monitoring long-running operations, checking disk I/O, fetching database metrics, and storing them in a time-series database such as Graphite.
Chapter 9, Authentication and Security in MongoDB, looks into various aspects involved in securing a MongoDB infrastructure. Topics covered in this chapter include creating and managing users, implementing role-based access models, implementing SSL/TLS-based transport mechanisms, and so on.
Chapter 10, Deploying MongoDB in Production, provides insights into deploying MongoDB in a production environment, upgrading servers to newer versions, using configuration management tools to deploy MongoDB, and using Docker Swarm to set up MongoDB in containers.
For the most part, this book requires only MongoDB 3.4 or higher. Although most of the operating system commands used throughout the book are for Linux, the semantics are generic and the steps can be reproduced on practically any operating system. It may be useful to have some knowledge of how MongoDB works, but for the most part, all the chapters are detailed enough for beginners as well.
This book is for database administrators or site reliability engineers who are keen on ensuring the stability and scalability of their MongoDB systems. Database administrators who have a basic understanding of the features of MongoDB and want to professionally configure, deploy, and administer a MongoDB database will find this book essential. If you are a MongoDB developer and want to get into MongoDB administration, this book will also help you.
In this book, you will find several headings that appear frequently (Getting ready, How to do it…, How it works…, There's more…, and See also). To give clear instructions on how to complete a recipe, we use these sections as follows.
This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.
This section contains the steps required to follow the recipe.
This section usually consists of a detailed explanation of what happened in the previous section.
This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.
This section provides helpful links to other useful information for the recipe.
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning. Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "You can view the available command line parameters by using --help or -h."
Any command-line input or output is written as follows:
ln -s /data/mongodb-linux-x86_64-ubuntu1404-3.4.4/ /data/mongodb
New terms and important words are shown in bold.
Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or disliked. Reader feedback is important to us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply email feedback@packtpub.com, and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files emailed directly to you.
You can also download the code files by clicking on the Code Files button on the book's web page at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account. Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of an archive utility such as 7-Zip or WinRAR.
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/MongoDB-Administrators-Guide. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books (maybe a mistake in the text or the code), we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title. To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at copyright@packtpub.com with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at questions@packtpub.com, and we will do our best to address the problem.
In this chapter, we will cover the following recipes:
- Installing and starting MongoDB on Linux
- Installing and starting MongoDB on macOS
- Binding MongoDB to a specific network interface and port
- Enabling SSL for MongoDB
- Choosing the right MongoDB storage engine
- Changing the storage engine of an existing MongoDB installation
- Separating directories per database
- Customizing the MongoDB configuration file
- Running MongoDB as a Docker container
In this chapter, we will look at how to install a standalone MongoDB server. We will also look at how to perform some useful customization to the default configuration of a MongoDB server. Lastly, we will run a MongoDB server inside a Docker container.
You will need a machine running Ubuntu 14.04 or higher, although in theory any Red Hat or Debian-based Linux distribution should be fine. You will also need to download the latest stable binary tarball from https://www.mongodb.com/download-center
ln -s /data/mongodb-linux-x86_64-ubuntu1404-3.4.4/ /data/mongodb
mkdir /data/db
/data/mongodb/bin/mongod --dbpath /data/db
2017-05-14T10:07:15.247+0000 I CONTROL [initandlisten] MongoDB starting : pid=3298 port=27017 dbpath=/data/db 64-bit host=vagrant-ubuntu-trusty-64
2017-05-14T10:07:15.247+0000 I CONTROL [initandlisten] db version v3.4.4
2017-05-14T10:07:15.248+0000 I CONTROL [initandlisten] git version: 888390515874a9debd1b6c5d36559ca86b44babd
2017-05-14T10:07:15.248+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
2017-05-14T10:07:15.248+0000 I CONTROL [initandlisten] allocator: tcmalloc
2017-05-14T10:07:15.249+0000 I CONTROL [initandlisten] modules: none
2017-05-14T10:07:15.249+0000 I CONTROL [initandlisten] build environment:
2017-05-14T10:07:15.249+0000 I CONTROL [initandlisten] distmod: ubuntu1404
2017-05-14T10:07:15.249+0000 I CONTROL [initandlisten] distarch: x86_64
2017-05-14T10:07:15.250+0000 I CONTROL [initandlisten] target_arch: x86_64
2017-05-14T10:07:15.250+0000 I CONTROL [initandlisten] options: { storage: { dbPath: "/data/db" } }
< -- snip -- >
2017-05-14T10:07:15.313+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.4
2017-05-14T10:07:15.313+0000 I NETWORK [thread1] waiting for connections on port 27017
Additionally, for convenience, we can edit the system's PATH variable to include the MongoDB binaries directory. This allows us to invoke the MongoDB binaries without having to type the entire path. For example, to execute the mongo client, instead of having to type /data/mongodb/bin/mongo every time, we can simply type mongo. This can be done by appending the following lines to your ~/.bashrc or ~/.zshrc file (for bash and zsh, respectively):
PATH=/data/mongodb/bin:${PATH}
export PATH
We downloaded a precompiled binary package and started the mongod server using the most basic command-line parameter, --dbpath, so that it uses a customized directory, /data/db, for storing databases. As you might have noticed, the MongoDB server, by default, starts listening on TCP port 27017 on all interfaces.
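If you want to confirm this from another terminal, a quick check of the listening socket is enough. This is a minimal sketch, assuming the ss utility (or netstat) is available on your distribution:
# 0.0.0.0 in the local address column means mongod is listening on all interfaces
ss -lntp | grep 27017
# Alternatively, simply connect with the bundled shell
/data/mongodb/bin/mongo localhost:27017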
The mongod binary has a lot of interesting options. You can view the available command line parameters by using --help or -h. Alternatively, you can also find a detailed reference of available options, at https://docs.mongodb.com/master/reference/program/mongod/.
Just like most mature community projects, MongoDB also provides packages for formats supported by Debian/Ubuntu and Red Hat/CentOS package managers. There is extensive documentation on how to configure your operating system's package manager to automatically download the MongoDB package and install it. For more information on how to do so, see: https://docs.mongodb.com/master/administration/install-on-linux/.
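To give a rough idea of what the package-manager route looks like, here is a minimal sketch of an apt-based installation on Ubuntu 14.04; the repository line is typical for the 3.4 series and the signing key ID is a placeholder, so take the exact values from the documentation linked above:
# Import MongoDB's public GPG key (substitute the key ID published in the official docs)
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv <MONGODB-3.4-KEY-ID>
# Add the 3.4 repository and install the server, shell, and tools
echo "deb [ arch=amd64 ] http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
sudo apt-get update
sudo apt-get install -y mongodb-org
# The package installs an init script, so the service can be managed as usual
sudo service mongod start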
Similar to the previous recipe, Installing and starting MongoDB on Linux, we will see how to set up MongoDB on a macOS operating system.
MongoDB supports macOS 10.7 (Lion) or higher, so ensure that your operating system is upgraded. Download the latest stable binary tarball from https://www.mongodb.com/download-center.
tar xvf mongodb-osx-x86_64-3.4.4.tgz
All of MongoDB's core binaries are available in the ~/data/mongodb-osx-x86_64-3.4.4/bin directory.
cd ~/data/
ln -s mongodb-osx-x86_64-3.4.4 mongodb
mkdir ~/data/db
~/data/mongodb/bin/mongod --dbpath ~/data/db
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] MongoDB starting : pid=960 port=27017 dbpath=/Users/cyrus.dasadia/data/db 64-bit host=foo
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] db version v3.4.4
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] git version: 888390515874a9debd1b6c5d36559ca86b44babd
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] allocator: system
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] modules: none
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] build environment:
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] distarch: x86_64
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] target_arch: x86_64
2017-05-21T15:21:20.662+0530 I CONTROL [initandlisten] options: { storage: { dbPath: "/Users/cyrus.dasadia/data/db" } }
<<--- snip -- >>
2017-05-21T15:21:21.492+0530 I NETWORK [thread1] waiting for connections on port 27017
PATH=~/data/mongodb/bin:${PATH}
export PATH
Similar to our first recipe, we downloaded a precompiled binary package and started the MongoDB server using the most basic command-line parameter, --dbpath, so that it uses a customized directory, ~/data/db, for storing databases. As you might have noticed, the MongoDB server, by default, starts listening on TCP port 27017 on all interfaces. We also saw how to add the MongoDB binary directory's path to our system's PATH variable for more convenient access to the MongoDB binaries.
As you might have observed, after starting the MongoDB server, the mongod process binds to all interfaces, which may not be suitable for all use cases. For example, if you are using MongoDB for development, or you are running a single-node instance on the same server as your application, you probably do not wish to expose MongoDB to the entire network. You might also have a server with multiple network interfaces and may wish to have the MongoDB server listen on a specific interface. In this recipe, we will see how to start MongoDB on a specific interface and port.
Make sure you have MongoDB installed on your system as shown in the previous recipes.
mongod --dbpath /data/db
This starts the mongod daemon which binds to all network interfaces on port 27017.
mongo 192.168.1.112:27017
You should see a MongoDB shell.
mongod --dbpath /data/db --bind_ip 127.0.0.1
mongo 192.168.1.112:27017
mongo 127.0.0.1:27017
mongod --dbpath /data/db --bind_ip 127.0.0.1 --port 27000
mongo 127.0.0.1:27000
By default, the mongod daemon binds to all interfaces on TCP port 27017. By passing an IP address with the --bind_ip parameter, we instructed the mongod daemon to listen only on this socket. Next, we passed the --port parameter along with --bind_ip to instruct the mongod daemon to listen on a particular port and IP. Using a non-standard port is a common practice when one wishes to run multiple instances of the mongod daemon (each with a different --dbpath) or wishes to add a little touch of security by obscurity. Either way, we will be using this practice in our later recipes to test shard and replica set setups running on a single server.
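As a quick illustration of the multiple-instances case mentioned above, here is a minimal sketch of two independent mongod daemons on one host; the data directories /data/db1 and /data/db2 are assumptions and must exist beforehand:
# First instance on the default port (run in its own terminal)
mongod --dbpath /data/db1 --bind_ip 127.0.0.1 --port 27017
# Second instance with its own data directory on a non-standard port
mongod --dbpath /data/db2 --bind_ip 127.0.0.1 --port 27018
# Connect to each instance explicitly
mongo 127.0.0.1:27017
mongo 127.0.0.1:27018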
By default, connections to the MongoDB server are not encrypted. If one were to intercept this traffic, almost all the data transferred between the client and the server would be visible as clear text. If you are curious, I would encourage you to use tcpdump or Wireshark to capture packets between a mongod daemon and the client. As a result, it is highly advisable to encrypt all connections to your mongod server by enabling Transport Layer Security (TLS), also commonly known as SSL.
Make sure you have MongoDB installed on your system as shown in the previous recipes.
openssl req -x509 -newkey rsa:4096 -nodes -keyout mongo-secure.key -out mongo-secure.crt -days 365
cat mongo-secure.key mongo-secure.crt > mongo-secure.pem
mongod --dbpath /data/db --sslMode requireSSL --sslPEMKeyFile /data/mongo-secure.pem
mongo localhost:27017
2017-05-13T16:51:08.031+0000 I NETWORK [thread1] connection accepted from 192.168.200.200:43441 #4 (1 connection now open)
2017-05-13T16:51:08.032+0000 I - [conn4] AssertionException handling request, closing client connection: 17189 The server is configured to only allow SSL connections
2017-05-13T16:51:08.032+0000 I - [conn4] end connection 192.168.200.200:43441 (1 connection now open)
mongo --ssl --sslAllowInvalidCertificates
In step 1, we created a self-signed certificate to get us started with SSL-enabled connections. One could very well use a certificate signed by a valid Certificate Authority (CA), but for test purposes we are fine with a self-signed certificate. In all honesty, if connection security is all you need, a self-signed certificate can also be used in a production environment, as long as you keep the keys secure. You might take it a step further by creating your own CA certificate and using it to sign your server certificates.
In step 2, we concatenate the key and the certificate file. Next, in step 3, we start the mongod daemon with --sslMode requireSSL, followed by the path to the concatenated .pem file. At this point, we have a standalone MongoDB server listening on the default port 27017, ready to accept only SSL-based clients.
Next, we attempt to connect to the mongod server using the default non-SSL mode, which is immediately rejected by the server. Finally, in step 5, we explicitly make an SSL connection by providing the --ssl parameter followed by --sslAllowInvalidCertificates. The latter parameter is used because we are using a self-signed certificate on the server. If we were using a certificate signed by an authorized CA, or even our own self-signed CA, we could use the --sslCAFile parameter to provide the CA certificate, as sketched below.
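For reference, here is a minimal sketch of the self-signed CA approach; all file names and subject values are illustrative assumptions:
# Create a CA key and a self-signed CA certificate
openssl req -x509 -newkey rsa:4096 -nodes -keyout my-ca.key -out my-ca.crt -days 365 -subj "/CN=MyTestCA"
# Create a key and a certificate signing request for the mongod server
openssl req -newkey rsa:4096 -nodes -keyout mongo-secure.key -out mongo-secure.csr -subj "/CN=mongodb.example.com"
# Sign the server certificate with our CA and build the .pem for mongod
openssl x509 -req -in mongo-secure.csr -CA my-ca.crt -CAkey my-ca.key -CAcreateserial -out mongo-secure.crt -days 365
cat mongo-secure.key mongo-secure.crt > mongo-secure.pem
# Clients can now validate the server against the CA certificate
mongo --ssl --sslCAFile my-ca.crt --host mongodb.example.com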
MongoDB also supports X.509 certificate-based authentication as an option to username and passwords. We will cover this topic in Chapter 9, Authentication and Security in MongoDB.
Starting with MongoDB version 3.0, a new storage engine named WiredTiger became available, and it soon became the default storage engine in version 3.2. Until then, MMAPv1 had been the default storage engine. I will give you a brief rundown of the main features of both storage engines, which should hopefully give you enough to decide which one suits your application's requirements.
WiredTiger provides the ability for multiple clients to perform write operations on the same collection. This is achieved by providing document-level concurrency, such that during a given write operation, the database locks only the given document in the collection, as opposed to its predecessor, which would lock the entire collection. This drastically improves performance for write-heavy applications. Additionally, WiredTiger provides compression of data for indexes and collections. The compression algorithms currently used by WiredTiger are Google's Snappy and zlib. Although disabling compression is possible, one should not jump that gun unless the change has been truly load-tested while planning your storage strategy.
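If you do want to experiment with the compression settings, they can be chosen when starting the server. Here is a minimal sketch using command-line flags; zlib is shown purely as an example, the valid values being none, snappy (the default), and zlib:
# Start mongod with zlib block compression for collection data
/data/mongodb/bin/mongod --dbpath /data/db --storageEngine wiredTiger --wiredTigerCollectionBlockCompressor zlib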
WiredTiger uses Multi-Version Concurrency Control (MVCC), which allows point-in-time snapshots of transactions. These finalized snapshots are written to disk, which creates checkpoints in the database. These checkpoints eventually help determine the last good state of the data files and help in recovering data during abnormal shutdowns. Additionally, journaling is also supported with WiredTiger, where write-ahead transaction logs are maintained. The combination of journaling and checkpoints increases the chances of data recovery during failures. WiredTiger uses internal caching as well as the filesystem cache to provide faster responses to queries. With high concurrency in mind, WiredTiger's architecture better utilizes multi-core systems.
MMAPv1 is quite mature and has proven to be quite stable over the years. One of the storage allocation strategies used with this engine is the power-of-two allocation strategy. This primarily involves allocating document space in powers of two (roughly double what the document needs), so that in-place updates of documents become highly likely without having to move the documents during updates. Another storage strategy used with this engine is fixed sizing. Here, the documents are padded (for example, with zeros) so that the maximum data allocation for each document is attained. This strategy is usually followed by applications that have fewer updates.
Consistency in MMAPv1 is achieved through journaling: writes are first applied to a private view in memory and written to the on-disk journal, after which the changes are applied to the shared view, that is, the data files. There is no support for data compression with MMAPv1. Lastly, MMAPv1 relies heavily on page caches and hence uses up available memory to retain the working dataset in cache, thus providing good performance. MongoDB does, however, yield (free up) memory used for the cache if another process demands it. Some production deployments avoid enabling swap space to ensure these caches are not written to disk, which may degrade performance.
So which storage engine should you choose? Well, given the points mentioned above, I personally feel you should go with WiredTiger, as document-level concurrency alone is a good marker for attaining better performance. However, as with all engineering decisions, one should definitely not shy away from performing appropriate load testing of the application on both storage engines.
In this recipe, we will look at how to migrate existing data onto a new storage engine. MongoDB does not allow on the fly (live) migrations, so we will have to do it the hard way.
Ensure you have a MongoDB database installation ready.
/data/mongodb/bin/mongod --dbpath /data/db --storageEngine mmapv1
> var status = db.serverStatus()
> status['storageEngine']
{
"name" : "mmapv1",
"supportsCommittedReads" : false,
"readOnly" : false,
"persistent" : true
}
> use mydb
> for(var x=0; x<100; x++){
db.mycol.insert({
age:(Math.round(Math.random()*100)%20)
})
}
> db.mycol.count()
100
mkdir /data/backup
mongodump -o /data/backup --host localhost:27017
mkdir /data/newdb
/data/mongodb/bin/mongod --dbpath /data/newdb --storageEngine wiredTiger
mongorestore /data/backup/
> var status = db.serverStatus()
> status['storageEngine']
{
"name" : "wiredTiger",
"supportsCommittedReads" : true,
"readOnly" : false,
"persistent" : true
}
> use mydb
switched to db mydb
> db.mycol.count()
100
As WiredTiger has been the default storage engine since MongoDB 3.2, for this exercise we explicitly started a MongoDB instance with the MMAPv1 storage engine in step 1. In step 2, we stored the db.serverStatus() command's output in a temporary variable to inspect the server's storageEngine key. This helps us see which storage engine our MongoDB instance is running. In step 3, we switched to the database mydb and ran a simple JavaScript loop to add 100 documents to a collection called mycol. Next, in step 4, we created a backup directory, /data/backup, which is passed as a parameter to the mongodump utility. We will discuss the mongodump utility in more detail in Chapter 6, Managing MongoDB Backups.
Once we shut down the mongod instance, in step 5 we are ready to start a new instance of MongoDB, but this time with the WiredTiger storage engine. We follow the basic practice of covering for failure: instead of removing /data/db, we create a new path for this instance (#AlwaysHaveABackupPlan). Our new MongoDB instance is empty, so in step 7 we import the aforementioned backup into the database using the mongorestore utility. As the new MongoDB instance is running the WiredTiger storage engine, our backup (which is essentially BSON data) is restored and saved to disk using this storage engine. Lastly, in step 8, we simply inspect the storageEngine key of the db.serverStatus() output and confirm that we are indeed using WiredTiger.
As you can see, this is an overly simplistic example of how to convert MongoDB data from one storage engine format to another. One has to keep in mind that this operation will take a significant amount of time depending on the size of data. However, application downtime can be averted if we were to use a replica set. More on this later.
In this recipe, we will be looking at how to optimize disk I/O by separating databases into different directories.
Ensure you have a MongoDB database installation ready.
/data/mongodb/bin/mongod --dbpath /data/db
mongo localhost:27017
> use mydb
> db.mycol.insert({foo:1})
ls -la /data/db
total 244
drwxr-xr-x 4 root root 4096 May 21 08:45 .
drwxr-xr-x 10 root root 4096 May 21 08:42 ..
-rw-r--r-- 1 root root 16384 May 21 08:43 collection-0-626293768203557661.wt
-rw-r--r-- 1 root root 16384 May 21 08:43 collection-2-626293768203557661.wt
-rw-r--r-- 1 root root 16384 May 21 08:43 collection-5-626293768203557661.wt
drwxr-xr-x 2 root root 4096 May 21 08:45 diagnostic.data
-rw-r--r-- 1 root root 16384 May 21 08:43 index-1-626293768203557661.wt
-rw-r--r-- 1 root root 16384 May 21 08:43 index-3-626293768203557661.wt
-rw-r--r-- 1 root root 16384 May 21 08:43 index-4-626293768203557661.wt
-rw-r--r-- 1 root root 16384 May 21 08:43 index-6-626293768203557661.wt
drwxr-xr-x 2 root root 4096 May 21 08:42 journal
-rw-r--r-- 1 root root 16384 May 21 08:43 _mdb_catalog.wt
-rw-r--r-- 1 root root 6 May 21 08:42 mongod.lock
-rw-r--r-- 1 root root 16384 May 21 08:44 sizeStorer.wt
-rw-r--r-- 1 root root 95 May 21 08:42 storage.bson
-rw-r--r-- 1 root root 49 May 21 08:42 WiredTiger
-rw-r--r-- 1 root root 4096 May 21 08:42 WiredTigerLAS.wt
-rw-r--r-- 1 root root 21 May 21 08:42 WiredTiger.lock
-rw-r--r-- 1 root root 994 May 21 08:45 WiredTiger.turtle
-rw-r--r-- 1 root root 61440 May 21 08:45 WiredTiger.wt
mkdir /data/newdb
/data/mongodb/bin/mongod --dbpath /data/newdb --directoryperdb
mongo localhost:27017
> use mydb
> db.mycol.insert({bar:1})
ls -la /data/newdb
total 108
drwxr-xr-x 7 root root 4096 May 21 08:42 .
drwxr-xr-x 10 root root 4096 May 21 08:42 ..
drwxr-xr-x 2 root root 4096 May 21 08:41 admin
drwxr-xr-x 2 root root 4096 May 21 08:42 diagnostic.data
drwxr-xr-x 2 root root 4096 May 21 08:41 journal
drwxr-xr-x 2 root root 4096 May 21 08:41 local
-rw-r--r-- 1 root root 16384 May 21 08:42 _mdb_catalog.wt
-rw-r--r-- 1 root root 0 May 21 08:42 mongod.lock
drwxr-xr-x 2 root root 4096 May 21 08:41 mydb
-rw-r--r-- 1 root root 16384 May 21 08:42 sizeStorer.wt
-rw-r--r-- 1 root root 95 May 21 08:41 storage.bson
-rw-r--r-- 1 root root 49 May 21 08:41 WiredTiger
-rw-r--r-- 1 root root 4096 May 21 08:42 WiredTigerLAS.wt
-rw-r--r-- 1 root root 21 May 21 08:41 WiredTiger.lock
-rw-r--r-- 1 root root 986 May 21 08:42 WiredTiger.turtle
-rw-r--r-- 1 root root 28672 May 21 08:42 WiredTiger.wt
We start by running a mongod instance with no special parameters except for --dbpath. In step 2, we create a new database, mydb, and insert a document into the collection mycol using the mongo shell. By doing this, the data files for this new database are created, which can be seen by inspecting the directory structure of our main database path, /data/db. Among other files, you can see that collection files begin with collection-<number> and their relevant index files begin with index-<number>. As we guessed, all databases and their relevant files are within the same directory as our db path.
If you are curious and wish to find the correlation between the files and the db, then run the following commands in mongo shell:
> use mydb
> var curiosity = db.mycol.stats()
> curiosity['wiredTiger']['uri']
statistics:table:collection-5-626293768203557661
The last part of this string, that is, collection-5-626293768203557661, corresponds to the file in our /data/db path.
Moving on, in steps 4 and 5, we stop the previous mongod instance, create a new path for our data files, and start a new mongod instance, but this time with the --directoryperdb parameter. As before, in step 6 we insert some random data into the mycol collection of a new database called mydb. In step 7, we look at the directory listing of our data path and can see that there is a subdirectory which, as you guessed, matches our database name, mydb. If you look inside this directory, that is, /data/newdb/mydb, you should see a collection and an index file.
So one might ask, why go through all this trouble to have separate directories for databases? Well, in certain application scenarios, if your database workloads are significantly high, you should consider storing the database on a separate disk/volume. Ideally, this should be a physically separate disk or a RAID volume created from separate physical disks. This ensures the separation of disk I/O from other operations, including MongoDB journals. Additionally, it also helps you separate your fault domains. One thing you should keep in mind is that journals are stored separately, that is, outside the database's directory. So, using separate disks for databases allows the journals to avoid contending for the same disk I/O path.
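To make the idea concrete, here is a rough sketch of placing one busy database on its own disk when --directoryperdb is in use; the device name /dev/sdb1 and the database name mydb are assumptions for illustration:
# Prepare a dedicated volume and mount it where mydb's directory will live
mkfs.ext4 /dev/sdb1
mkdir -p /data/newdb/mydb
mount /dev/sdb1 /data/newdb/mydb
# With --directoryperdb, all of mydb's collection and index files now land on the dedicated disk
/data/mongodb/bin/mongod --dbpath /data/newdb --directoryperdb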
In all the previous recipes of this chapter, we have passed command line flags to the mongod daemon. In this recipe, we will look at how to use the config file as an alternative to passing command line flags.
Nothing special, just make sure you have a MongoDB database installation ready.
storage:
  dbPath: /data/db
  engine: wiredTiger
  directoryPerDB: true
net:
  port: 27000
  bindIp: 127.0.0.1
  ssl:
    mode: requireSSL
    PEMKeyFile: /data/mongo-secure.pem
/data/mongodb/bin/mongod --config /data/mongod.conf
MongoDB allows passing command line parameters to mongod using a YAML file. In step 1, we are creating a config file called mongod.conf. We add all the previously used command line parameters from this chapter, into this config file in YAML format. A quick look at the file's content should make it clear that the parameters are divided into sections and relevant subsections. Next, in step 2, we start the mongod instance, but this time with just one parameter --config followed by the path of our config file.
As we saw in earlier recipes, although passing configuration parameters on the command line works, it is highly advisable to use configuration files instead. Having all parameters in a single configuration file not only makes it easier to view the parameters but also helps us programmatically (YAML FTW!) inspect and manage the values of these variables. This simplifies operations and reduces the chance of errors.
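A handy way to verify what a running mongod actually picked up from the file is the getCmdLineOpts admin command; a small sketch from the mongo shell:
// Returns the raw argv as well as the fully parsed configuration,
// including every value that was read from the --config file
> db.adminCommand({getCmdLineOpts: 1}).parsed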
Do have a look at the other parameters available in the configuration file, documented at https://docs.mongodb.com/manual/reference/configuration-options/.
In this recipe, we will look at how to run MongoDB as a Docker container. I will assume that you have at least a bare-minimum understanding of how Docker works. If you do not, have a look at https://www.docker.com/what-container; it should help you get acquainted with Docker's concepts.
Make sure you have Docker installed on your system. If you are using Linux, then it is highly advisable to use kernel version 3.16 or higher.
docker pull mongo:3.4.4
docker images
docker run -d -v /data/db:/data/db --name mymongo mongo:3.4.4
docker ps
docker exec -it mymongo mongo
docker run -d -v /data/db:/data/db --name mymongo --net=host mongo:3.4.4 --bind_ip 127.0.0.1 --port 27000
docker exec -it mymongo mongo localhost:27000
In step 1, we fetched the official MongoDB image from Docker's public repository. You can view it at https://hub.docker.com/_/mongo/. While fetching the image, we explicitly mentioned the version, that is, mongo:3.4.4. Although mentioning the version (also known as the Docker image tag) is optional, it is highly advisable that when you download any application images via Docker, you always fetch them with the relevant tag. Otherwise, you might end up fetching the latest tag, and as it changes often, you could end up running different versions of your applications.
Next, in step 2, we run the docker images command which shows us a list of images available on the server, in our case it should show you the MongoDB image with the tag 3.4.4 available for use.
In step 3, we start a container in detached (-d) mode. As all containers use ephemeral storage and we wish to retain the data, we mount a volume (-v) by mapping the local path /data/db to the container's internal directory /data/db. This ensures that even if the container is stopped or removed, our data is retained on the host's /data/db path. At this point, one could also use Docker volumes, but in order to keep things simple, I prefer using a regular directory. Next, in the command, we provide a name (--name) for our container. This is followed by the Docker image and tag that should be used to run the container; in our case, it is mongo:3.4.4. When you enter the command, you should get a long string as a return value; this is your new container's ID.
In step 4, we run the docker ps command, which shows us a list of running containers. If your container has stopped or exited, use docker ps -a to show all containers. In the output, you can see the container's details. By default, Docker starts a container in bridge mode; that is, when Docker is installed, it creates a bridge interface on the host, and the resulting containers run using virtual network devices attached to the bridge. This results in complete network isolation of the container. Thus, in our case, if we wish to connect to the container's mongod instance on port 27017, we would need to explicitly expose TCP port 27017 to the base host or bind the base host's port to that of the container, thus allowing an external MongoDB client to connect to our instance. You can read more about Docker's networking architecture at https://docs.docker.com/engine/userguide/networking/.
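If you prefer to stay on the default bridge network instead of switching to host networking, you can publish the port explicitly; here is a brief sketch, where the host directory and container name are changed so that they do not clash with the container already running:
# Map the host's TCP port 27017 to the container's 27017 (bridge networking)
docker run -d -v /data/db-bridged:/data/db -p 27017:27017 --name mymongo-bridged mongo:3.4.4
# External clients can now reach mongod through the host's IP address
mongo <host-ip>:27017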
In step 5, we execute the mongo shell command from the container to connect to the mongod instance. The official MongoDB container image also accepts command-line flags passed through the docker run command. We do this in step 6, along with running the container in host mode networking. Host mode networking runs the container in the host's network namespace, thus bypassing the bridge interface. We pass the --bind_ip and --port flags to the docker run command, which instructs mongod to bind to 127.0.0.1:27000. As we are using host mode networking, the mongod daemon effectively binds to the base host's loopback interface. In step 7, we connect to our new MongoDB instance, but this time we explicitly provide the connection address.
If you ever wish to debug the container, you can always run the container in the foreground by passing the -it parameters in place of -d. Additionally, try running the following command and check the output:
docker logs mymongo
Lastly, I would suggest you have a look at the start scripts used by this container's image to understand how configurations are templatized. It will definitely give you some pointers that will help when you are setting up your own MongoDB container.
With this recipe, we conclude this chapter. I hope these recipes have helped you gear up for getting started with MongoDB. As all things go, no amount of text can replace actual practice. So I sincerely request you to get your hands dirty and attempt these recipes yourself.
In the next chapter, we will take a closer look at MongoDB's indexes and how they can be leveraged to gain a tremendous performance boost in data retrieval.
In this chapter, we will be covering the following topics:
- Creating an index and measuring its impact on queries
- Managing existing indexes
- Using compound indexes
- Creating indexes in the background
In this chapter, we are going to look at how to create and manage database indexes in MongoDB. We will also look at how to view index sizes, create background indexes, and create various other forms of indexes. So let's get started!
In this recipe, we will be using a fairly large dataset and add it into MongoDB. Then we will examine how a query executes in this dataset with and without an index.
Assuming that you are already running a MongoDB server, we will be importing a dataset of around 100,000 records available in the form of a CSV file called chapter_2_mock_data.csv. You can download this file from the Packt website.
mongoimport --headerline --ignoreBlanks --type=csv -d mydb -c mockdata -h localhost chapter_2_mock_data.csv
You should see output like this:
2017-06-18T08:25:08.444+0530 connected to: localhost
2017-06-18T08:25:09.498+0530 imported 100000 documents
mongo localhost:27017
use mydb
db.mockdata.count()
You should see the following result:
105000
> db.mockdata.find({city:'Singapore'}).explain("executionStats")
You should see the following result:
{
"executionStats": {
"executionStages": {
"advanced": 1,
"direction": "forward",
"docsExamined": 100000,
"executionTimeMillisEstimate": 44,
"filter": {
"city": {
"$eq": "Singapore"
}
},
"invalidates": 0,
"isEOF": 1,
"nReturned": 1,
"needTime": 100000,
"needYield": 0,
"restoreState": 783,
"saveState": 783,
"stage": "COLLSCAN",
"works": 100002
},
"executionSuccess": true,
"executionTimeMillis": 41,
"nReturned": 1,
"totalDocsExamined": 100000,
"totalKeysExamined": 0
},
"ok": 1,
"queryPlanner": {
"indexFilterSet": false,
"namespace": "mydb.mockdata",
"parsedQuery": {
"city": {
"$eq": "Singapore"
}
},
"plannerVersion": 1,
"rejectedPlans": [],
"winningPlan": {
"direction": "forward",
"filter": {
"city": {
"$eq": "Singapore"
}
},
"stage": "COLLSCAN"
}
},
"serverInfo": {
"gitVersion": "888390515874a9debd1b6c5d36559ca86b44babd",
"host": "vagrant-ubuntu-trusty-64",
"port": 27017,
"version": "3.4.4"
}
}
> db.mockdata.createIndex({'city': 1})
The following result is obtained:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> db.mockdata.find({city:'Singapore'}).explain("executionStats")
{
"executionStats": {
"executionStages": {
"advanced": 1,
"alreadyHasObj": 0,
"docsExamined": 1,
"executionTimeMillisEstimate": 0,
"inputStage": {
"advanced": 1,
"direction": "forward",
"dupsDropped": 0,
"dupsTested": 0,
"executionTimeMillisEstimate": 0,
"indexBounds": {
"city": [
"[\"Singapore\", \"Singapore\"]"
]
},
"indexName": "city_1",
"indexVersion": 2,
"invalidates": 0,
"isEOF": 1,
"isMultiKey": false,
"isPartial": false,
"isSparse": false,
"isUnique": false,
"keyPattern": {
"city": 1
},
"keysExamined": 1,
"multiKeyPaths": {
"city": []
},
"nReturned": 1,
"needTime": 0,
"needYield": 0,
"restoreState": 0,
"saveState": 0,
"seeks": 1,
"seenInvalidated": 0,
"stage": "IXSCAN",
"works": 2
},
"invalidates": 0,
"isEOF": 1,
"nReturned": 1,
"needTime": 0,
"needYield": 0,
"restoreState": 0,
"saveState": 0,
"stage": "FETCH",
"works": 2
},
"executionSuccess": true,
"executionTimeMillis": 0,
"nReturned": 1,
"totalDocsExamined": 1,
"totalKeysExamined": 1
},
"ok": 1,
"queryPlanner": {
"indexFilterSet": false,
"namespace": "mydb.mockdata",
"parsedQuery": {
"city": {
"$eq": "Singapore"
}
},
"plannerVersion": 1,
"rejectedPlans": [],
"winningPlan": {
"inputStage": {
"direction": "forward",
"indexBounds": {
"city": [
"[\"Singapore\", \"Singapore\"]"
]
},
"indexName": "city_1",
"indexVersion": 2,
"isMultiKey": false,
"isPartial": false,
"isSparse": false,
"isUnique": false,
"keyPattern": {
"city": 1
},
"multiKeyPaths": {
"city": []
},
"stage": "IXSCAN"
},
"stage": "FETCH"
}
},
"serverInfo": {
"gitVersion": "888390515874a9debd1b6c5d36559ca86b44babd",
"host": "vagrant-ubuntu-trusty-64",
"port": 27017,
"version": "3.4.4"
}
}
In step 1, we used the mongoimport utility to import our sample dataset from chapter_2_mock_data.csv, which is a comma-separated file. We'll discuss mongoimport in more detail in later chapters, so don't worry about it for now. Once we import the data, we execute the mongo shell and confirm that we've indeed imported our sample dataset (100,000 documents).
In step 4, we run a simple find() function chained with the explain() function. The explain() function shows us all the details about the execution of our query, especially the executionStats. In this, if you look at the value of the key executionStages['stage'], you can see it says COLLSCAN. This indicates that the entire collection was scanned, which can be confirmed by looking at the totalDocsExamined key's value, which should say 100,000. Clearly, our collection needs an index!
In step 5, we create an index by calling db.mockdata.createIndex({'city': 1}). In the createIndex() function, we pass the city field with a value of 1, which tells MongoDB to create an ascending index on this key. You can use -1 to create a descending index, if need be. On executing this function, MongoDB immediately begins creating the index on the collection.
In step 6, we execute the exact same find() query as we did in step 4 and, upon inspecting the executionStats, you can observe that the value of the key executionStages now contains some more details. In particular, the value of the stage key is FETCH and inputStage['stage'] is IXSCAN. In short, this indicates that the result was fetched by running an index scan. As this was a direct index hit, the value of totalDocsExamined is 1.
Over time, you may come across scenarios that require redesigning your indexing strategy. This may happen when you add a new feature to your application or simply when you identify a more appropriate key to index. In either case, it is highly advisable to remove older (unused) indexes to ensure you do not have any unnecessary overhead on the database.
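One way to spot such unused indexes is the $indexStats aggregation stage (available since MongoDB 3.2), which reports per-index usage counters; a quick sketch in the mongo shell:
// For each index on the collection, "accesses.ops" shows how many operations
// have used it and "accesses.since" shows when the counter was last reset
> db.mockdata.aggregate([{$indexStats: {}}])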
In this recipe, we will be looking at some common operations we can perform on indexes like viewing, deleting, checking index sizes, and re-indexing.
For this recipe, load the sample dataset and create an index on the city field, as described in the previous recipe.
> db.mockdata.getIndexes()
The following result is obtained:
[
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "mydb.mockdata"
},
{
"v" : 2,
"key" : {
"city" : 1,
"first_name" : 1
},
"name" : "city_1_first_name_1",
"ns" : "mydb.mockdata"
}
]
> db.mockdata.dropIndex('city_1_first_name_1')
You should see the following result:
{ "nIndexesWas" : 2, "ok" : 1 }
> db.mockdata.createIndex({'city':1}, {name: 'city_index'})
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> db.mockdata.getIndexes()
We should see the following result:
[
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "mydb.mockdata"
},
{
"v" : 2,
"key" : {
"city" : 1
},
"name" : "city_index",
"ns" : "mydb.mockdata"
}
]
> db.mockdata.createIndex({'city':1})
You should see the following message:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 2,
"numIndexesAfter" : 2,
"note" : "all indexes already exist",
"ok" : 1
}
stats = db.mockdata.stats()
stats["totalIndexSize"]
It should show the following result:
1818624
stats["indexSizes"]
This should show the following result:
{ "_id_" : 905216, "city_index" : 913408 }
> db.mockdata.reIndex('city_index')
The following result is obtained:
{
"nIndexesWas" : 2,
"nIndexes" : 2,
"indexes" : [
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "mydb.mockdata"
},
{
"v" : 2,
"key" : {
"city" : 1
},
"name" : "city_index",
"ns" : "mydb.mockdata"
}
],
"ok" : 1
}
Most of the commands are pretty self-explanatory. In steps 1 and 2, we view and delete an index, respectively. You can also use db.<collection>.dropIndexes() to delete all indexes. In step 3, we recreate the index on the city field, but this time we provide an additional parameter to customize the name of the index. This can be confirmed by viewing the output of the getIndexes() command, as shown in step 3. Next, in step 4, we try to create another index on the city field (in ascending order). However, as we already have an index on this field, it would be redundant and hence MongoDB does not allow it. If you change the 1 to -1, that is, change the sort order to descending, the operation would succeed and you'd end up with another index on the city field, but sorted in descending order.
In step 5, we run the stats() function on the collection, which can alternatively be run as db.mockdata.runCommand('collstats'), and save its output in a temporary variable called stats. If we inspect the totalIndexSize and indexSizes keys, we can find the total as well as the index-specific sizes, respectively. At this point, I would strongly suggest you have a look at the other keys in the output. It should give you a peek into the low-level internals of how MongoDB manages each collection.
Lastly, in step 6, we re-index an existing index. This drops the existing index and rebuilds it, either in the foreground or in the background, depending on how it was set up initially. It is usually not necessary to rebuild an index; however, as per MongoDB's documentation, you may choose to do so if you feel that the index size is disproportionate or your collection has grown significantly in size.
The beauty of indexes is that they can be used with multiple keys. A single-key index can be thought of as a table with one column. A compound index, which spans multiple keys, can be visualized as a multi-column table where entries are sorted first by the first column, then by the next, and so on. In this recipe, we will look at how to create a compound index and examine how it works.
Load the sample dataset and create an index on the city field, as described in the previous recipe.
> plan = db.mockdata.find({city:'Boston', first_name: 'Sara'}).explain("executionStats")
> plan['executionStats']
You should see the following result:
{
"executionSuccess" : true,
"nReturned" : 1,
"executionTimeMillis" : 0,
"totalKeysExamined" : 9,
"totalDocsExamined" : 9,
"executionStages" : {
"stage" : "FETCH",
"filter" : {
"first_name" : {
"$eq" : "Sara"
}
},
"nReturned" : 1,
"executionTimeMillisEstimate" : 0,
"works" : 10,
"advanced" : 1,
"needTime" : 8,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"docsExamined" : 9,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 9,
"executionTimeMillisEstimate" : 0,
"works" : 10,
"advanced" : 9,
"needTime" : 0,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"keyPattern" : {
"city" : 1
},
"indexName" : "city_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"city" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"city" : [
"[\"Boston\", \"Boston\"]"
]
},
"keysExamined" : 9,
"seeks" : 1,
"dupsTested" : 0,
"dupsDropped" : 0,
"seenInvalidated" : 0
}
}
}
> db.mockdata.dropIndex('city_1')
You should see an output similar to this:
{ "nIndexesWas" : 2, "ok" : 1 }
> db.mockdata.createIndex({'city': 1, 'first_name': 1})
You should see an output similar to this:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> plan = db.mockdata.find({city:'Boston', first_name: 'Sara'}).explain("executionStats")
> plan['executionStats']
You should see an output similar to this:
{
"executionSuccess": true,
"nReturned": 1,
"executionTimeMillis": 0,
"totalKeysExamined": 1,
"totalDocsExamined": 1,
"executionStages": {
"stage": "FETCH",
"nReturned": 1,
"executionTimeMillisEstimate": 0,
"works": 2,
"advanced": 1,
"needTime": 0,
"needYield": 0,
"saveState": 0,
"restoreState": 0,
"isEOF": 1,
"invalidates": 0,
"docsExamined": 1,
"alreadyHasObj": 0,
"inputStage": {
"stage": "IXSCAN",
"nReturned": 1,
"executionTimeMillisEstimate": 0,
"works": 2,
"advanced": 1,
"needTime": 0,
"needYield": 0,
"saveState": 0,
"restoreState": 0,
"isEOF": 1,
"invalidates": 0,
"keyPattern": {
"city": 1,
"first_name": 1
},
"indexName": "city_1_first_name_1",
"isMultiKey": false,
"multiKeyPaths": {
"city": [],
"first_name": []
},
"isUnique": false,
"isSparse": false,
"isPartial": false,
"indexVersion": 2,
"direction": "forward",
"indexBounds": {
"city": [
"[\"Boston\", \"Boston\"]"
],
"first_name": [
"[\"Sara\", \"Sara\"]"
]
},
"keysExamined": 1,
"seeks": 1,
"dupsTested": 0,
"dupsDropped": 0,
"seenInvalidated": 0
}
}
}
We start by loading the sample dataset with an index on the city field. Next, we execute a find() command on our collection chained with the explain('executionStats') function, in steps 2 and 3 respectively. This time, we capture the output in a variable so that it is easier to examine later.
In step 4, we specifically examine the execution stats. We can observe that nine documents were fetched using the index, of which one matched. When we ran db.mockdata.find({city:'Boston', first_name: 'Sara'}), MongoDB first saw that the city field already has an index on it. So, for the remaining part of the query, MongoDB simply took the documents returned from the index and filtered them on the first_name field until it found the value Sara.
In step 5, we remove the existing index on the field city, and in step 6, we create a compound index on the two field names city and first_name. At this point, I would like to point out that the sequence of the field names is extremely important. As I explained in the introduction of this recipe, compound indexes in MongoDB are created in the order in which the field names are mentioned. Hence, when we create a compound index with, say, {city:1, first_name:1}, MongoDB creates a B-tree index sorted first on the field city in ascending order, followed by first_name in ascending order.
In step 7, we run the same find() query and examine the executionStats. We can observe that this time, as both keys were indexed, totalDocsExamined was 1, that is, we got an exact match in our compound index.
Compound indexes, if used smartly, can dramatically reduce your document seek times. For example, let's assume our application has a view that only needs to show a list of names in a city. A traditional approach would be to run a find query, get the list of documents, and send them to the application's view. However, we know that the other fields in the document are not needed for this view. So, by having a compound index on city and first_name and adding field projection, we send only the indexed values down to the application, that is:
db.mockdata.find({city:'Boston', first_name:'Sara'}, {city:1, first_name:1, _id:0})
By doing this, not only do we leverage the speed of the index, but we also negate the need to fetch the non-indexed keys. Another term for this is a covered query, and it can improve our application's performance significantly!
Also, compound indexes allow us to use the index for the leftmost keys. In our example, if we were to run db.mockdata.find({city:'Boston'}), the result would be fetched from the index. However, if we were to search only on first_name, that is, db.mockdata.find({first_name:'Sara'}), the server would do a full collection scan to fetch the result. I would encourage you to run the preceding queries chained with the explain() function and see the details yourself.
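If you would like to see this for yourself, here is a minimal sketch of the two explain() calls; expect an IXSCAN stage for the leftmost-key query and a COLLSCAN for the other:
// Uses the compound index, as city is the leftmost key
> db.mockdata.find({city:'Boston'}).explain("executionStats").queryPlanner.winningPlan
// first_name is not a leftmost prefix of the index, so this results in a full collection scan
> db.mockdata.find({first_name:'Sara'}).explain("executionStats").queryPlanner.winningPlan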
In the previous recipes, whenever we've created indexes, it has always been in the foreground, that is, the database server blocks all changes to the database until the index creation is completed. This is definitely not suitable for larger datasets, where index creation can take several seconds or longer, which in turn can cause application errors.
Load the sample dataset, as shown in the Creating an index recipe.
> db.mockdata.dropIndexes()
{
"nIndexesWas" : 2,
"msg" : "non-_id indexes dropped for collection",
"ok" : 1
}
for x in $(seq 20); do mongoimport --headerline --type=csv -d mydb -c mockdata -h localhost chapter_2_mock_data.csv;done
> db.mockdata.createIndex({city:1, first_name:1, last_name:1})
> db.mockdata.insert({foo:'bar'})
2017-06-13T03:54:26.296+0000 I INDEX [conn1] build index on: mydb.mockdata properties: { v: 2, key: { city: 1.0, first_name: 1.0, last_name: 1.0 }, name: "city_1_first_name_1_last_name_1", ns: "mydb.mockdata" }
2017-06-13T03:54:26.297+0000 I INDEX [conn1] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2017-06-13T03:54:36.575+0000 I INDEX [conn1] build index done. scanned 2100001 total records. 10 secs
2017-06-13T03:54:36.576+0000 I COMMAND [conn2] command mydb.mockdata appName: "MongoDB Shell" command: insert { insert: "mockdata", documents: [ { _id: ObjectId('59474af356e41a7db57952b6'), foo: "bar" } ], ordered: true } ninserted:1 keysInserted:3 numYields:0 reslen:29 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 9307131 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 9307ms
2017-06-13T03:54:36.577+0000 I COMMAND [conn1] command mydb.$cmd appName: "MongoDB Shell" command: createIndexes { createIndexes: "mockdata", indexes: [ { key: { city: 1.0, first_name: 1.0, last_name: 1.0 }, name: "city_1_first_name_1_last_name_1" } ] } numYields:0 reslen:98 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 10284ms
> db.mockdata.createIndex({city:1, first_name:1, last_name:1}, {background:1})
> db.mockdata.insert({foo:'bar'})
You should see the following output:
WriteResult({ "nInserted" : 1 })
2017-06-13T04:00:29.248+0000 I INDEX [conn1] build index on: mydb.mockdata properties: { v: 2, key: { city: 1.0, first_name: 1.0, last_name: 1.0 }, name: "city_1_first_name_1_last_name_1", ns: "mydb.mockdata", background: 1.0 }
2017-06-13T04:00:32.008+0000 I - [conn1] Index Build (background): 397400/2200004 18%
2017-06-13T04:00:35.002+0000 I - [conn1] Index Build (background): 673800/2200005 30%
2017-06-13T04:00:38.009+0000 I - [conn1] Index Build (background): 762300/2200005 34%
2017-06-13T04:00:41.006+0000 I - [conn1] Index Build (background): 903400/2200005 41%
<< --- output snipped --- >>
2123200/2200005 96%
2017-06-13T04:02:32.021+0000 I - [conn1] Index Build (background): 2148300/2200005 97%
2017-06-13T04:02:35.021+0000 I - [conn1] Index Build (background): 2172800/2200005 98%
2017-06-13T04:02:38.019+0000 I - [conn1] Index Build (background): 2195800/2200005 99%
2017-06-13T04:02:38.566+0000 I INDEX [conn1] build index done. scanned 2100006 total records. 129 secs
2017-06-13T04:02:38.572+0000 I COMMAND [conn1] command mydb.$cmd appName: "MongoDB Shell" command: createIndexes { createIndexes: "mockdata", indexes: [ { key: { city: 1.0, first_name: 1.0, last_name: 1.0 }, name: "city_1_first_name_1_last_name_1", background: 1.0 } ] } numYields:20353 reslen:98 locks:{ Global: { acquireCount: { r: 20354, w: 20354 } }, Database: { acquireCount: { w: 20354, W: 2 } }, Collection: { acquireCount: { w: 20354 } } } protocol:op_command 129326ms
In step 1, we remove any existing indexes. Next, in order to better simulate index creation delays, we simply reimport our sample dataset about 20 times. This should give us about 2 million records in our collection by the end of step 2. As I still have the sample data from the previous recipes, my document count may be slightly higher; don't worry about that.
Now, in order to test how foreground index creation hinders database operations, we need to be able to perform two tasks simultaneously. For this, we set up two terminal windows, preferably side by side, each with a mongo shell connected and mydb selected. In step 4, we create an index on the three fields city, first_name, and last_name. Again, this is intentional, to add a bit of computational overhead to our test database. Note that, unlike previous runs, this command will not return immediately. So, switch to the other terminal window and try inserting a simple record, as shown in step 5. If you have both windows stacked side by side, you will notice that both commands return almost simultaneously. If you look at the mongod server logs, you can see that both operations, in this case, took roughly 10 seconds to complete. Also, as expected, our insert query did not complete until the index creation had released the lock on the collection.
In step 7, we delete the index again, and in step 8, we recreate it, but this time with the option {background: 1}. This tells mongod to start the index creation process in the background. In step 9, we switch to the other terminal window and try inserting a random document into our collection. Lo and behold, our document gets inserted immediately. Now is a good time to switch to the mongod server logs. As shown in step 10, you can now see that the index creation happens in small batches. When the index creation completes, you can see that mongod acquired about 20,354 locks for this process, as opposed to 1 when creating the index in the foreground. This lock-and-release method allowed our insert query to go through. However, this does come with a slight trade-off: the background index build took about 130 seconds, as compared to 10 seconds in the foreground.
There you have it, a simple test to show the effectiveness of creating background indexes. As real-world production scenarios go, it is almost always safer to create indexes in the background unless you have a very strong reason not to.
In this recipe, we will explore the expireAfterSeconds property of MongoDB indexes to allow automatic deletion of documents from a collection.
For this recipe, all you need is a mongod instance running. We will be creating and working on a new collection called ttlcol in the database mydb.
db.ttlcol.drop()
for(var x=1; x<=100; x++){
var past = new Date()
past.setSeconds(past.getSeconds() - (x * 60))
// Insert a document with timestamp in the past
var doc = {
foo: 'bar',
timestamp: past
}
db.ttlcol.insert(doc)
// Insert a document with timestamp in the future
var future = new Date()
future.setSeconds(future.getSeconds() + (x * 60))
var doc = {
foo: 'bar',
timestamp: future
}
db.ttlcol.insert(doc)
}
db.ttlcol.count()
db.ttlcol.createIndex({timestamp:1}, {expireAfterSeconds: 10})
You should see output similar to this:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
db.ttlcol.count()
The number of documents returned should be lower than 200.
In step 1, we emptied the ttlcol collection in mydb to ensure there is no old data. Next, in step 2, we ran a simple JavaScript snippet that adds 200 records, each having a BSON Date() field called timestamp. We added 100 documents with timestamps in the past and 100 with timestamps in the future, each spaced one minute apart.
Then, in step 3, we created a regular index but with an additional parameter, {expireAfterSeconds: 10}. With this, we are telling the server to expire documents 10 seconds after the time stored in their timestamp field. Once this index is added, you can check that the number of documents in the collection has dropped from 200 to, in this case, 113 and counting. What happens here is that a background thread in the MongoDB server wakes up every minute and removes any document that matches our index's condition. At this point, I would like to point out that if our timestamp field did not hold a valid Date value (or an array of Date values), no documents would be removed.
If you wish to have explicit expiry times, set expireAfterSeconds to 0. In that case, documents are removed as soon as the time stored in the indexed field is reached.
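As a minimal sketch of this pattern (the field name expireAt below is my own choice, not part of the recipe's dataset), you can store the exact expiry time in each document:
// Expire documents as soon as the time in expireAt passes
db.ttlcol.createIndex({expireAt: 1}, {expireAfterSeconds: 0})
// This document becomes eligible for deletion roughly one hour from now,
// the next time the TTL background thread runs after that moment
db.ttlcol.insert({foo: 'bar', expireAt: new Date(Date.now() + 60 * 60 * 1000)})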
So when would you need a TTL-based index? Well, if you happen to store time-sensitive documents such as user sessions, or documents that can be removed after a certain period, such as events, logs, or transaction history, then TTL-based indexes are your best option. They offer you more control over document retention than traditional capped collections.
MongoDB allows you to create an index on fields that may not exist in all documents of a given collection. These are called sparse indexes, and in this recipe, we will look at how to create them.
For this recipe, load the sample dataset and create an index on the city field, as described in the Creating an index recipe.
db.mockdata.count()
The preceding command should return 100000.
db.mockdata.find({language: {$eq:null}}).count()
The preceding command should return 12704.
db.mockdata.createIndex({language:1}, {sparse: true})
You should see output similar to this:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
db.mockdata.getIndexes()
The preceding command should give you output similar to this:
[
{
"key": {
"_id": 1
},
"name": "_id_",
"ns": "test.mockdata",
"v": 2
},
{
"key": {
"language": 1
},
"name": "language_1",
"ns": "test.mockdata",
"sparse": true,
"v": 2
}
]
db.mockdata.find({language: 'French'}).explain('executionStats')['executionStats']
The preceding command should give you output similar to this:
"executionStages": {
"advanced": 893,
"alreadyHasObj": 0,
"docsExamined": 893,
"executionTimeMillisEstimate": 0,
"inputStage": {
"advanced": 893,
"direction": "forward",
"dupsDropped": 0,
"dupsTested": 0,
"executionTimeMillisEstimate": 0,
"indexBounds": {
"language": [
"[\"French\", \"French\"]"
]
},
"indexName": "language_1",
"indexVersion": 2,
"invalidates": 0,
"isEOF": 1,
"isMultiKey": false,
"isPartial": false,
"isSparse": true,
"isUnique": false,
"keyPattern": {
"language": 1
},
"keysExamined": 893,
"multiKeyPaths": {
"language": []
},
"nReturned": 893,
"needTime": 0,
"needYield": 0,
"restoreState": 6,
"saveState": 6,
"seeks": 1,
"seenInvalidated": 0,
"stage": "IXSCAN",
"works": 894
},
"invalidates": 0,
"isEOF": 1,
"nReturned": 893,
"needTime": 0,
"needYield": 0,
"restoreState": 6,
"saveState": 6,
"stage": "FETCH",
"works": 894
},
"executionSuccess": true,
"executionTimeMillis": 1,
"nReturned": 893,
"totalDocsExamined": 893,
"totalKeysExamined": 893
}
For this example, we have picked a sparsely populated field, language, which does not exist in all documents of our sample dataset. In step 1, we can see that around 12,000 documents do not contain this field. Next, in step 2, we create an index with the optional parameter {sparse: true}, which tells the MongoDB server to create a sparse index on our field, language. The index gets created and works just like any other index, as seen in steps 3 and 4.
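One thing worth checking for yourself: because the sparse index skips documents that lack the language field, the planner is not expected to use it for queries that must also match missing values. A quick, hedged way to observe this is:
// Documents without the field also match null, so using the sparse index
// would produce incomplete results; expect the stage to be "COLLSCAN"
db.mockdata.find({language: null}).explain('executionStats')['executionStats']['executionStages']['stage']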
Partial indexes were introduced relatively recently, in MongoDB version 3.2. A partial index is similar to a sparse index, but with the added advantage of being able to use expressions ($eq, $gt, and so on) and operators ($and) to decide which documents get indexed.
For this recipe, load the sample dataset and create an index on the city field, as described in the Creating an index recipe.
db.mockdata.count()
The preceding command should return 100000:
db.mockdata.find({language: {$eq:null}}).count()
The preceding command should return 12704.
> db.mockdata.createIndex(
{first_name:1},
{partialFilterExpression: { language: {$exists: true}}}
)
This should give you output similar to this:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
db.mockdata.getIndexes()
The preceding command should give you output similar to this:
[
{
"key": {
"_id": 1
},
"name": "_id_",
"ns": "mydb.mockdata",
"v": 2
},
{
"key": {
"first_name": 1
},
"name": "first_name_1",
"ns": "mydb.mockdata",
"partialFilterExpression": {
"language": {
"$exists": true
}
},
"v": 2
}
]
db.mockdata.find({first_name: 'Sara'}).explain('executionStats')['executionStats']
The preceding command should give you output similar to this:
{
"executionStages": {
"advanced": 7,
"direction": "forward",
"docsExamined": 100000,
"executionTimeMillisEstimate": 21,
"filter": {
"first_name": {
"$eq": "Sara"
}
},
"invalidates": 0,
"isEOF": 1,
"nReturned": 7,
"needTime": 99994,
"needYield": 0,
"restoreState": 782,
"saveState": 782,
"stage": "COLLSCAN",
"works": 100002
},
"executionSuccess": true,
"executionTimeMillis": 33,
"nReturned": 7,
"totalDocsExamined": 100000,
"totalKeysExamined": 0
}
db.mockdata.find({first_name: 'Sara', language: 'Spanish'}).explain('executionStats')['executionStats']
The preceding command should give you output similar to this:
{
"executionStages": {
"advanced": 1,
"alreadyHasObj": 0,
"docsExamined": 7,
"executionTimeMillisEstimate": 0,
"filter": {
"language": {
"$eq": "Spanish"
}
},
"inputStage": {
"advanced": 7,
"direction": "forward",
"dupsDropped": 0,
"dupsTested": 0,
"executionTimeMillisEstimate": 0,
"indexBounds": {
"first_name": [
"[\"Sara\", \"Sara\"]"
]
},
"indexName": "first_name_1",
"indexVersion": 2,
"invalidates": 0,
"isEOF": 1,
"isMultiKey": false,
"isPartial": true,
"isSparse": false,
"isUnique": false,
"keyPattern": {
"first_name": 1
},
"keysExamined": 7,
"multiKeyPaths": {
"first_name": []
},
"nReturned": 7,
"needTime": 0,
"needYield": 0,
"restoreState": 0,
"saveState": 0,
"seeks": 1,
"seenInvalidated": 0,
"stage": "IXSCAN",
"works": 8
},
"invalidates": 0,
"isEOF": 1,
"nReturned": 1,
"needTime": 6,
"needYield": 0,
"restoreState": 0,
"saveState": 0,
"stage": "FETCH",
"works": 8
},
"executionSuccess": true,
"executionTimeMillis": 0,
"nReturned": 1,
"totalDocsExamined": 7,
"totalKeysExamined": 7
}
As in the previous recipe, we have picked a sparsely populated field, language, which does not exist in all documents of our sample dataset. In step 1, we can see that around 12,000 documents do not contain this field.
Next, in step 2, we create an index on the field first_name with the optional parameter partialFilterExpression. With this parameter, we have added the condition {language: {$exists: true}}, which instructs MongoDB to index first_name only for documents that have the language field present. If we look at the executionStats in step 4, we can observe that the index is not used if we do a simple search on the field first_name. However, in step 5, we can see that our query does use the index once we add an additional condition on the language field.
Apart from this simple example, there are tons of good variations possible if we use expressions like $lt, $gt, and so on. You can find some more examples at https://docs.mongodb.com/manual/core/index-partial/.
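For instance, here is a hedged sketch of a range-based filter; the numeric field age and the value Smith are hypothetical and not part of the chapter's sample dataset:
// Index last_name only for documents whose (assumed) age field is above 21
db.mockdata.createIndex(
  {last_name: 1},
  {partialFilterExpression: {age: {$gt: 21}}}
)
// Eligible to use the partial index: the query's range on age is a subset of the filter
db.mockdata.find({last_name: 'Smith', age: {$gt: 30}})
// Not eligible: {$gt: 10} is wider than the filter expression, so the index would be incomplete
db.mockdata.find({last_name: 'Smith', age: {$gt: 10}})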
So why would one use a partial index? Say, for example, you have a huge dataset and wish to have an index on a field which is sparsely spread across these documents. Traditional indexes would cause the entire collection to be indexed and may not be optimal if we are going to work on a subset of these documents.
MongoDB allows you to create an index on a field with the option of ensuring that it is unique in the collection. In this recipe, we will explore how it can be done.
For this recipe, we only need a running mongod instance.
use mydb
db.testuniq.insert({foo: 'zoidberg'})
db.testuniq.createIndex({foo:1}, {unique:1})
The preceding command should give you an output similar to this:
{
"createdCollectionAutomatically": false,
"numIndexesAfter": 2,
"numIndexesBefore": 1,
"ok": 1
}
db.testuniq.insert({foo: 'zoidberg'})
The preceding command should give you an error message similar to this:
WriteResult({
"nInserted" : 0,
"writeError" : {
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: mydb.testuniq index: foo_1 dup key: { : \"zoidberg\" }"
}
})
db.testuniq.dropIndexes()
db.testuniq.insert({foo: 'zoidberg'})
db.testuniq.find()
The preceding command should give you an output similar to this:
{ "_id" : ObjectId("59490cabc14da1366d83254f"), "foo" : "zoidberg" }
{ "_id" : ObjectId("59490d20c14da1366d832551"), "foo" : "zoidberg" }
db.testuniq.createIndex({foo:1}, {unique:1})
The preceding command should give you an output similar to this:
{
"ok" : 0,
"errmsg" : "E11000 duplicate key error collection: mydb.testuniq index: foo_1 dup key: { : \"zoidberg\" }",
"code" : 11000,
"codeName" : "DuplicateKey"
}
In step 1, we inserted a document into a new collection, testuniq. Next, in step 2, we created an index on the field foo with the parameter {unique: true}. In step 3, we try to add another record with the same value for the field foo as before, and we receive an error, as expected.
In step 4, we drop the indexes and add a duplicate record. Next we try to create a new unique index. This time we are not allowed because there are duplicates in our collection.
This is a simple example of how to create an index with a unique constraint. Additionally, we can also create unique indexes on fields that contain an array, for example {foo: ['bar', 'baz']}. MongoDB inspects each value of the array and checks it against the unique index. Try adding a document with the above values and see what happens; a small sketch of what to try follows.
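Here is one way to try it, starting from a clean collection so the unique index can be created without existing duplicates getting in the way:
db.testuniq.drop()
db.testuniq.createIndex({foo: 1}, {unique: 1})
// Each array element gets its own entry in the unique index
db.testuniq.insert({foo: ['bar', 'baz']})
// Expect an E11000 duplicate key error: 'bar' is already present in the index
db.testuniq.insert({foo: 'bar'})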
In this chapter we will be covering the following topics:
This chapter is slightly different from the previous ones, in that we will be looking at different technical aspects that should be considered to gain optimal performance from a MongoDB setup. As you are probably aware, application performance tuning is a highly nuanced art, so not all aspects will be covered here. However, I will try to discuss the most important points, which will help pave the way for more critical thinking on the subject.
In this recipe, we will be looking at the importance of provisioning your servers for better disk I/O.
Apart from CPU and memory (RAM), MongoDB, like most database applications, relies heavily on disk operations.
To better understand this dependency, let's look at a very simple example of reading data from a file. Suppose you have a file that contains a few thousand lines, each containing a set of strings in no particular order. If one were to write a program to search for a particular string, it would need to open the file, iterate through each line, and look for the string. Once the string is found, the program closes the file. As disks are usually much slower than RAM, this approach of opening, reading, and closing the file on every query is suboptimal.

To circumvent this, Linux (and most modern operating systems) rely heavily on the cache buffer. The operating system kernel uses this cache to store chunks of data, in blocks, which are frequently read from the disk. So, when a process tries to read a particular file, the kernel first does a lookup in its cache. If the data is not cached, then the kernel reads it from the disks and loads it in the cache. Data is evicted from the cache based on its frequency of use, that is, less used data gets removed first to make room for more frequently accessed data. Additionally, the kernel tries to utilize all available free memory for the cache, but it automatically reduces the cache size if a process requires memory.

The design of this cache was to circumvent the delays inherent in reading from and writing to disks. Any application that relies on disk I/O is greatly impacted by the speed of the disk. RAM, on the other hand, is extremely fast. How fast, you ask? To put it in perspective, most disk operations are in the range of milliseconds (thousandths of a second), whereas RAM operations are in the range of nanoseconds (billionths of a second).
MongoDB is designed quite similarly to this, in that the database server tries to keep the index and the working set in memory. At the same time, for actual disk reads, it heavily relies on the filesystem buffer cache. But even with everything optimized to be in memory, at some point, MongoDB would need to either write to the disk or read from it.
Disk read/write operations are commonly measured in Input/Output Operations Per Second (IOPS). As disk I/O is a blocking operation, the number of disk IOPS available to MongoDB will eventually determine how fast your database performs.
First things first, disks are slow. Neither magnetic nor solid state disks can perform anywhere near the speed of RAM. As MongoDB tries to store a database index in memory, try to have workloads that utilize the benefits of indexes. It goes without saying that your servers need to have sufficient RAM to store indexes and disk cache. While deciding the optimal RAM capacity for your server, consider aspects such as the percentage rate of growth of data (and indexes), sufficient size for disk buffer cache, and headroom for the underlying operating system. A very simple example for calculating IOPS for a disk would be 1/(average disk latency + average seek time). So, for a disk with 2 ms average latency and 3 ms average seek time, the total supported IOPS would be 1/(0.002 + 0.003) = 200 IOPS. Again, this does not take into account a lot of other factors, such as disk degradation, ECC, and sequential or random seeks.
With a limited cap on disk IOPS, you can substantially increase the server's IOPS capacity by using RAID 0 (disk striping). For example, an array of four disks in RAID 0 would theoretically give you 4 x 200 = 800 IOPS. The trade-off with RAID 0 is that you get no data redundancy, that is, if a disk fails, your data is lost. This can be mitigated by having a MongoDB replica set. However, if you do decide to use another RAID setup, keep in mind that your write operations will be directly affected by it. That is, with RAID 1 or RAID 10, you perform two disk writes for every one logical write. RAID 5 and RAID 6 are even less suitable, as they add even more write overhead.
Lastly, know your application requirements. I cannot stress enough how important it is to analyze and monitor your applications' read and write operations. It is ideal to have, at the very least, a rough estimate of the ratio of reads to writes.
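One rough way to gauge that ratio on a running server (a sketch only, not a substitute for proper monitoring) is to compare the cumulative operation counters reported by serverStatus since the last restart:
var oc = db.serverStatus().opcounters
// query + getmore approximate reads; insert + update + delete approximate writes
print('reads:', oc.query + oc.getmore)
print('writes:', oc.insert + oc.update + oc['delete'])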
By now, you should have a fair idea of the importance of disk I/O and how it directly impacts your database performance. MongoDB provides a nifty little utility called mongoperf that allows us to quickly measure disk I/O performance.
For this recipe, we only need the mongoperf utility, which is available in the bin directory of your MongoDB installation.
root@ubuntu:~# echo "{ recSizeKB: 8, nThreads: 12, fileSizeMB: 10000, r: true, mmf: false }" | mongoperf
You will get the following result:
mongoperf use -h for help
parsed options:
{ recSizeKB: 8, nThreads: 12, fileSizeMB: 10000, r: true, mmf: false }
creating test file size:10000MB ...
1GB...
2GB...
3GB...
4GB...
5GB...
6GB...
7GB...
8GB...
9GB...
testing...
options:{ recSizeKB: 8, nThreads: 12, fileSizeMB: 10000, r: true, mmf: false }
wthr 12
new thread, total running : 1
read:1 write:0
19789 ops/sec 77 MB/sec
19602 ops/sec 76 MB/sec
19173 ops/sec 74 MB/sec
19300 ops/sec 75 MB/sec
18838 ops/sec 73 MB/sec
19494 ops/sec 76 MB/sec
19579 ops/sec 76 MB/sec
19002 ops/sec 74 MB/sec
new thread, total running : 2
<---- output truncated --->
new thread, total running : 12
read:1 write:0
read:1 write:0
read:1 write:0
read:1 write:0
40544 ops/sec 158 MB/sec
40237 ops/sec 157 MB/sec
40463 ops/sec 158 MB/sec
40463 ops/sec 158 MB/sec

root@ubuntu:~# echo "{ recSizeKB: 8, nThreads: 12, fileSizeMB: 10000, r: true, mmf: true }" | mongoperf
The following result is obtained:
mongoperf
use -h for help
parsed options:
{ recSizeKB: 8, nThreads: 12, fileSizeMB: 10000, r: true, mmf: true }
creating test file size:10000MB ...
1GB...
2GB...
3GB...
4GB...
5GB...
6GB...
7GB...
8GB...
9GB...
testing...
options:{ recSizeKB: 8, nThreads: 12, fileSizeMB: 10000, r: true, mmf: true }
wthr 12
new thread, total running : 1
read:1 write:0
8107 ops/sec
9253 ops/sec
9258 ops/sec
9290 ops/sec
9088 ops/sec
<---- output truncated --->
new thread, total running : 12
read:1 write:0
read:1 write:0
read:1 write:0
read:1 write:0
9430 ops/sec
9668 ops/sec
9804 ops/sec
9619 ops/sec
9371 ops/sec
root@ubuntu:~# echo "{ recSizeKB: 8, nThreads: 12, fileSizeMB: 400, r: true, mmf: true }" | mongoperf
You will see the following:
mongoperf
use -h for help
parsed options:
{ recSizeKB: 8, nThreads: 12, fileSizeMB: 400, r: true, mmf: true }
creating test file size:400MB ...
testing...
options:{ recSizeKB: 8, nThreads: 12, fileSizeMB: 400, r: true, mmf: true }
wthr 12
new thread, total running : 1
read:1 write:0
2605344 ops/sec
4918429 ops/sec
4720891 ops/sec
4766924 ops/sec
4693762 ops/sec
4810953 ops/sec
4785765 ops/sec
4839164 ops/sec
<---- output truncated --->
new thread, total running : 12
read:1 write:0
read:1 write:0
read:1 write:0
read:1 write:0
4835022 ops/sec
4962848 ops/sec
4945852 ops/sec
4945882 ops/sec
4970441 ops/sec
The mongoperf utility takes its parameters in the form of a JSON document. We can either provide this configuration in a file or simply pipe it to mongoperf's stdin. To view the available options, run mongoperf -h, which produces the following:
usage:
mongoperf < myjsonconfigfile
{
nThreads:<n>, // number of threads (default 1)
fileSizeMB:<n>, // test file size (default 1MB)
sleepMicros:<n>, // pause for sleepMicros/nThreads between each operation (default 0)
mmf:<bool>, // if true do i/o's via memory mapped files (default false)
r:<bool>, // do reads (default false)
w:<bool>, // do writes (default false)
recSizeKB:<n>, // size of each write (default 4KB)
syncDelay:<n> // secs between fsyncs, like --syncdelay in mongod. (default 0/never)
}
In step 1, we pass a handful of parameters to mongoperf. Let's take a look at them: recSizeKB: 8 sets the size of each read/write record to 8 KB, nThreads: 12 ramps the test up to 12 concurrent threads, fileSizeMB: 10000 creates a roughly 10 GB test file, r: true performs read operations only, and mmf: false bypasses memory-mapped files so that reads go to the physical disk.
As we fire up the mongoperf utility, it first creates a roughly 10 GB file on the disk. Once created, it starts with one thread and slowly ramps up to 12 (nThreads). You can clearly see the increase in read operations per second as the number of threads increases. Depending on your disk's capabilities, you should expect to reach the maximum IOPS limit pretty soon. This can be observed, in step 2, by running the iostat command and watching the %util column. Once it reaches 100%, you can assume that the disk is operating at its maximum limit.
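If you have not used it before, iostat is part of the sysstat package on most Linux distributions; a typical invocation while mongoperf is running would be:
# Print extended device statistics once per second; watch the %util column
iostat -x 1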
In step 3, we run the same test, but this time with mmf set to true. Here, we are testing I/O through memory-mapped files, that is, reads are served via the filesystem cache rather than going directly to the physical disk. However, you can see that the performance is not as high as we would expect. In fact, it is drastically lower than the IOPS achieved when reading directly from disk. The primary reason is that our working file is 10 GB in size, whereas my VM's memory is only 1 GB. As the entire dataset cannot fit in memory, mongoperf has to routinely fetch data from the disk, which hurts even more when the reads are random, as can be observed in the output. In step 4, we confirm our theory by running the test again, but this time with a fileSizeMB of 400, which is smaller than the available memory. As you can see, the number of IOPS is drastically higher than in the previous run, confirming that it is extremely important that your working dataset fits in your system's memory.
So there you have it, a simple way to test your system's IOPS using the mongoperf utility. Although we only tested read operations, I would strongly urge you to test write as well as mixed read/write workloads when evaluating your systems. Additionally, you should also perform mmf-enabled tests to get an idea of an adequately sized working set for a given server.
In this recipe, we will be looking at how to capture queries that have longer execution times. By identifying slow running queries, you can work towards implementing appropriate database indexes or even consider optimizing the application code.
Assuming that you are already running a MongoDB server, we will be importing a dataset of around 100,000 records that are available in the form of a CSV file called chapter_2_mock_data.csv. You can download this file from the Packt website.
mongoimport --headerline --ignoreBlanks --type=csv -d mydb -c mockdata -h localhost chapter_2_mock_data.csv
This will give us the following result:
2017-06-23T08:12:02.122+0530 connected to: localhost
2017-06-23T08:12:03.144+0530 imported 100000 documents
mongo localhost
> use mydb
switched to db mydb
> db.mockdata.count()
100000
> db.setProfilingLevel(1, 20)
{ "was" : 0, "slowms" : 20, "ok" : 1 }
> db.mockdata.find({first_name: "Pam"}).count()
10
> db.system.profile.find().pretty()
The following result is obtained:
{
"op" : "command",
"ns" : "mydb.mockdata",
"command" : {
"count" : "mockdata",
"query" : {
"first_name" : "Pam"
},
"fields" : {
}
},
"keysExamined" : 0,
"docsExamined" : 100000,
"numYield" : 781,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(1564)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(782)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(782)
}
}
},
"responseLength" : 29,
"protocol" : "op_command",
"millis" : 37,
"planSummary" : "COLLSCAN",
"execStats" : {
"stage" : "COUNT",
"nReturned" : 0,
"executionTimeMillisEstimate" : 26,
"works" : 100002,
"advanced" : 0,
"needTime" : 100001,
"needYield" : 0,
"saveState" : 781,
"restoreState" : 781,
"isEOF" : 1,
"invalidates" : 0,
"nCounted" : 10,
"nSkipped" : 0,
"inputStage" : {
"stage" : "COLLSCAN",
"filter" : {
"first_name" : {
"$eq" : "Pam"
}
},
"nReturned" : 10,
"executionTimeMillisEstimate" : 26,
"works" : 100002,
"advanced" : 10,
"needTime" : 99991,
"needYield" : 0,
"saveState" : 781,
"restoreState" : 781,
"isEOF" : 1,
"invalidates" : 0,
"direction" : "forward",
"docsExamined" : 100000
}
},
"ts" : ISODate("2017-07-07T03:26:57.818Z"),
"client" : "192.168.200.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
We begin by importing a fairly large dataset using the mongoimport utility, as we did in the Working with indexes recipe in Chapter 2, Understanding and Managing Indexes. Next, in steps 2 and 3, we start the MongoDB shell and check that our documents were imported.
In step 4, we enable database profiling by running the db.setProfilingLevel(1, 20) command. Database profiling is a feature in MongoDB that logs slow queries or operations along with profiling information about each operation. MongoDB allows three profiling levels: level 0 disables profiling, level 1 records only operations slower than the configured threshold, and level 2 records all operations.
By default, profiling for all databases is set to level 0. This can be confirmed by running the following command:
db.getProfilingStatus()
{ "was" : 0, "slowms" : 100 }
The was field indicates the current profiling level, whereas the slowms field indicates the maximum allowed execution time (in milliseconds) for operations. All operations taking longer than the slowms threshold will be recorded by the database profiler. In our recipe, we set the profiling level to 1, indicating that we want the profiling level to record only slow queries, and the second parameter, 20, indicates that any query taking longer than 20 ms should be recorded.
In step 5, we run a simple query to count the number of documents that have first_name = 'Pam'. As this is not an indexed collection, the server will have to scan through all documents, which hopefully takes more than 20 ms. Once the profiler's threshold is crossed (in our case, 20 ms), the data is stored in the system.profile collection.
In step 6, we query the system.profile collection to find all operations captured by the database profiler. Each document in this collection captures a lot of information about the operation. A few of the notable fields are op (the type of operation), ns (the namespace, that is, the database and collection), millis (the total execution time in milliseconds), planSummary (for example, COLLSCAN or IXSCAN), docsExamined and keysExamined, and ts (the timestamp of the operation).
An exhaustive list can be found in MongoDB's official documentation, https://docs.mongodb.com/manual/reference/database-profiler/.
Considering the wealth of information collected by the database profiler, it should be very easy not only to debug slow queries but even monitor the collection to alert on patterns (more on this in Chapter 8, Monitoring MongoDB).
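For day-to-day debugging, a simple query like the following (assuming profiling is enabled) pulls out the five slowest recorded operations along with the fields that are usually most useful for triage:
db.system.profile.find({}, {op: 1, ns: 1, millis: 1, planSummary: 1, ts: 1}).sort({millis: -1}).limit(5).pretty()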
If, due to sheer boredom or just curiosity, you happen to inspect the system.profile collection, you will note that it is a capped collection with a size of 1 MB:
db.system.profile.stats()
The result is as follows:
{
"ns" : "mydb.system.profile",
"size" : 0,
"count" : 0,
"numExtents" : 1,
"storageSize" : 1048576,
"lastExtentSize" : 1048576,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : true,
"max" : NumberLong("9223372036854775807"),
"maxSize" : 1048576,
"nindexes" : 0,
"totalIndexSize" : 0,
"indexSizes" : {
},
"ok" : 1
}
This size may be sufficient for most cases, but if you need to increase the size of this collection, here is how to do it.
First, we disable profiling:
> db.setProfilingLevel(0)
{ "was" : 1, "slowms" : 100, "ok" : 1 }
Next, we drop the system.profile collection and create a new capped collection with a size of 10 MB:
> db.createCollection('system.profile', {capped: true, size: 10485760})
{ "ok" : 1 }
Finally, enable profiling:
> db.setProfilingLevel(1,20)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
That's it! Your system.profile collection's size is now 10 MB.
Amazon Web Services (AWS) provides a variety of instances in their Elastic Compute Cloud (EC2) offerings. With each type of EC2 instance, there are two distinct ways to store data: instance store and Elastic Block Storage (EBS).
Instance store refers to an ephemeral disk that is available as a block device to the instance and is physically present on the host of the instance. By being available on the same host, these disks provide extremely high throughput. However, instance stores are ephemeral and thus provide no guarantees of data retention if an instance is terminated or stopped, or the disk fails. This is clearly not suitable for a single node MongoDB instance, as you might lose your data any time the instance goes down. Not all hope is lost, though. We can use a replica set of three or more nodes to ensure data redundancy. For a more robust deployment, we can consider having an extra node in the replica set cluster that uses EBS and has its priority set to zero. This ensures that the node is always in sync with the data and at the same time is not used for serving actual queries.
EBS is network-attached storage that can be used as a block device and can be attached to any AWS instance. EBS volumes provide data persistence and can be reattached to any instance running in the same availability zone of the AWS region. There are various forms of EBS volume available, such as standard general purpose SSDs, Provisioned IOPS (PIOPS), and high throughput magnetic disks. As magnetic disks are more focused on high-throughput data streams mostly performing sequential reads on large files, they are not appropriate for MongoDB.
General purpose SSDs provide submillisecond latencies with a minimum baseline of 100 IOPS. They can also burst up to 10,000 IOPS, depending on the volume type, using a rather unique 3 IOPS per GB burst bucket system that I would rather not go into in too much detail here. PIOPS is another EBS offering, with which you choose a minimum guaranteed IOPS and are billed accordingly. For most small to medium sized workloads, general purpose SSDs should do the trick. However, when provisioning EBS volumes, we need to keep network utilization in mind.
As EBS volumes are accessed over the network, they tend to share the same network link as that of the instance. This may not be ideal for a database, as the application traffic to the instance would then be contending with that of the EBS volumes. AWS does provide EBS optimized EC2 instances that use a different network path so that your instance traffic does not affect your disk throughput.

Another significant optimization technique is to use multiple EBS volumes for different parts of your MongoDB data. For instance, we can have separate EBS volumes for the actual data, the database journal, the logs, and the backups. This separation of EBS volumes would ensure that journals, logs, and backup operations do not impinge on the throughput of the actual data.
Lastly, striping EBS volumes together (RAID 0) may increase your overall IOPS capacity. Although the official MongoDB documentation does not recommend using RAID 0 over EBS, I suggest testing your workload against RAID 0 EBS volumes to determine whether this suits your needs.
More on EBS can be found here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html.
In this recipe, we will be looking at what a working set is, why it is important, and how to calculate it.
As you probably know, MongoDB relies heavily on caching objects and indexes in RAM. The primary reason to do so is to leverage the speed at which data can be retrieved from RAM as compared to physical disks. Theoretically, a working set is the amount of data accessed by your clients. For performance reasons, it is highly recommended that the server should have sufficient RAM to fit the entire working set while keeping sufficient room for other operations and services running on the same server.

At a high level, the working set comprises the most frequently accessed data and indexes. To get an idea of your database's size, you can run the db.stats() command on the MongoDB shell:
db.stats()
You will get the following result:
{
"db" : "mydb",
"collections" : 5,
"views" : 0,
"objects" : 100009,
"avgObjSize" : 239.83617474427302,
"dataSize" : 23985776,
"storageSize" : 48304128,
"numExtents" : 12,
"indexes" : 2,
"indexSize" : 3270400,
"fileSize" : 67108864,
"nsSizeMB" : 16,
"extentFreeList" : {
"num" : 1,
"totalSize" : 1048576
},
"dataFileVersion" : {
"major" : 4,
"minor" : 22
},
"ok" : 1
}
In the output, dataSize represents the size of the entire (uncompressed) data of the given database, and indexSize represents the total size of all indexes in the database. In theory, we want to have enough RAM to fit all the data and indexes. This would result in the fewest seeks from physical storage and provide optimal read performance. However, for all practical purposes, this scenario may not hold in all cases. Say, for example, you have 24 GB of data and about 2 GB of indexes; it is recommended that you go with a server that has 32 GB RAM. But what if your application usage is such that you barely access about 4 GB of data? In this case, an overprovisioned server may not be an ideal choice. Similarly, if you have a smaller working set, say 6 GB, and you host it on a server with 8 GB RAM, and the working set grows considerably fast, you may soon run out of memory to fit it. My point is, while understanding the size of the working set is an absolute must, you should not underestimate the importance of monitoring the actual usage of the data.
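As a rough, back-of-the-envelope starting point (a sketch only; it ignores compression and access patterns), you can total the uncompressed data and index sizes across all databases on a server:
var totalMB = 0
db.adminCommand({listDatabases: 1}).databases.forEach(function(d) {
  var s = db.getSiblingDB(d.name).stats()
  totalMB += (s.dataSize + s.indexSize) / (1024 * 1024)
})
print('data + indexes:', totalMB.toFixed(2), 'MB')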
From version 3.0, MongoDB has provided detailed statistics of the WiredTiger storage engine, especially the cache. Here is the output of a production system that has 16 GB of memory. The approximate size of the working set is 600 MB and the index size is 3 MB:
> db.serverStatus().wiredTiger.cache
{
"tracked dirty bytes in the cache" : 0,
"tracked bytes belonging to internal pages in the cache" : 299485,
"bytes currently in the cache" : 641133907,
"tracked bytes belonging to leaf pages in the cache" : 7515893283,
"maximum bytes configured" : 7516192768,
"tracked bytes belonging to overflow pages in the cache" : 0,
"bytes read into cache" : 583725713,
"bytes written from cache" : 711362477,
<-- output truncated -->
"tracked dirty pages in the cache" : 0,
"pages currently held in the cache" : 3674,
"pages read into cache" : 3784,
"pages written from cache" : 117710
}
By default, WiredTiger uses the larger of 50% of (RAM minus 1 GB) or 256 MB for its internal cache. In the preceding output, this can be seen in the value of maximum bytes configured, which is roughly 7 GB on a 16 GB RAM server. This can be changed by setting the wiredTiger.engineConfig.cacheSizeGB parameter in the MongoDB configuration file, or at runtime via wiredTigerEngineRuntimeConfig.
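For example, assuming you wanted a 4 GB cache on this server, you could either start mongod with the --wiredTigerCacheSizeGB 4 option or adjust the running instance without a restart:
db.adminCommand({setParameter: 1, wiredTigerEngineRuntimeConfig: 'cache_size=4G'})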
You should keep an eye on tracked dirty bytes in the cache. If this increases consistently to a high number, you may need to look at changing the cache size. Here's a simple rule of thumb:
tracked dirty bytes in the cache < bytes currently in the cache < maximum bytes configured
In this chapter, we will cover the following topics:
This chapter aims to get you started with MongoDB replica sets. A replica set is essentially a group of MongoDB servers that form a quorum and replicate data across all nodes. Such a setup not only provides a high availability cluster but also allows the distribution of database reads across multiple nodes. A replica set consists of a single primary node along with one or more secondary nodes.
The primary node accepts all writes to the database, and each write operation is replicated to the secondary nodes through replication of operation logs, which are also known as oplogs.

A node is determined as primary by way of an election between the nodes in the replica set. Thus, any node within the cluster can become a primary node at any point. It is important to have an odd number of nodes in the replica set to ensure that the election process does not result in a tie. If you choose to have an even number of nodes in the replica set, MongoDB provides a non-resource intensive arbiter server that can perform heartbeats and take part in the election process.
In this chapter, we will be looking at various aspects of setting up and managing replica sets.
In this recipe, we will be setting up the first node of a three node replica set on a single server. In a production setup, this should be on three physically separate servers.
By now, I am assuming you are familiar with installing MongoDB and have it ready. Additionally, we will create individual directories for each MongoDB instance:
mkdir -p /data/server{1,2,3}/{conf,logs,db}
This should create three parent directories: /data/server1, /data/server2, and /data/server3, each containing subdirectories named conf, logs, and db. We will be using this directory format throughout the chapter.
mongod --dbpath /data/server1/db --replSet MyReplicaSet
rs.status()
{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "vagrant-ubuntu-trusty-64:27017",
"ok" : 1
}
rs.status()
{
"set" : "MyReplicaSet",
"date" : ISODate("2017-08-20T05:28:26.827Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1503206903, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1503206903, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1503206903, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "vagrant-ubuntu-trusty-64:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 35,
"optime" : {
"ts" : Timestamp(1503206903, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-08-20T05:28:23Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1503206902, 2),
"electionDate" : ISODate("2017-08-20T05:28:22Z"),
"configVersion" : 1,
"self" : true
}
],
"ok" : 1
}
2017-08-20T05:28:16.928+0000 I NETWORK [thread1] connection accepted from 192.168.200.1:55765 #1 (1 connection now open)
2017-08-20T05:28:16.929+0000 I NETWORK [conn1] received client metadata from 192.168.200.1:55765 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.4" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "14.5.0" } }
2017-08-20T05:28:22.625+0000 I COMMAND [conn1] initiate : no configuration specified. Using a default configuration for the set
2017-08-20T05:28:22.625+0000 I COMMAND [conn1] created this configuration for initiation : { _id: "MyReplicaSet", version: 1, members: [ { _id: 0, host: "vagrant-ubuntu-trusty-64:27017" } ] }
2017-08-20T05:28:22.625+0000 I REPL [conn1] replSetInitiate admin command received from client
2017-08-20T05:28:22.625+0000 I REPL [conn1] replSetInitiate config object with 1 members parses ok
2017-08-20T05:28:22.625+0000 I REPL [conn1] ******
2017-08-20T05:28:22.625+0000 I REPL [conn1] creating replication oplog of size: 1628MB...
2017-08-20T05:28:22.628+0000 I STORAGE [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
2017-08-20T05:28:22.628+0000 I STORAGE [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2017-08-20T05:28:22.628+0000 I STORAGE [conn1] Scanning the oplog to determine where to place markers for truncation
2017-08-20T05:28:22.634+0000 I REPL [conn1] ******
2017-08-20T05:28:22.646+0000 I INDEX [conn1] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2017-08-20T05:28:22.646+0000 I INDEX [conn1] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2017-08-20T05:28:22.646+0000 I INDEX [conn1] build index done. scanned 0 total records. 0 secs
2017-08-20T05:28:22.646+0000 I COMMAND [conn1] setting featureCompatibilityVersion to 3.4
2017-08-20T05:28:22.647+0000 I REPL [conn1] New replica set config in use: { _id: "MyReplicaSet", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "vagrant-ubuntu-trusty-64:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('59991df64db063a571ae8680') } }
2017-08-20T05:28:22.647+0000 I REPL [conn1] This node is vagrant-ubuntu-trusty-64:27017 in the config
2017-08-20T05:28:22.647+0000 I REPL [conn1] transition to STARTUP2
2017-08-20T05:28:22.647+0000 I REPL [conn1] Starting replication storage threads
2017-08-20T05:28:22.647+0000 I REPL [conn1] Starting replication fetcher thread
2017-08-20T05:28:22.647+0000 I REPL [conn1] Starting replication applier thread
2017-08-20T05:28:22.647+0000 I REPL [conn1] Starting replication reporter thread
2017-08-20T05:28:22.647+0000 I REPL [rsSync] transition to RECOVERING
2017-08-20T05:28:22.648+0000 I REPL [rsSync] transition to SECONDARY
2017-08-20T05:28:22.648+0000 I REPL [rsSync] conducting a dry run election to see if we could be elected
2017-08-20T05:28:22.648+0000 I REPL [ReplicationExecutor] dry election run succeeded, running for election
2017-08-20T05:28:22.654+0000 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 1
2017-08-20T05:28:22.654+0000 I REPL [ReplicationExecutor] transition to PRIMARY
2017-08-20T05:28:22.654+0000 I REPL [ReplicationExecutor] Entering primary catch-up mode.
2017-08-20T05:28:22.654+0000 I REPL [ReplicationExecutor] Exited primary catch-up mode.
2017-08-20T05:28:23.649+0000 I REPL [rsSync] transition to primary complete; database writes are now permitted
In step 1, we begin by starting the mongod process with two parameters. First, we provide the database path with --dbpath, which is quite standard for all mongod processes. Next, we provide the --replSet parameter with the value MyReplicaSet. This parameter starts the mongod process with the explicit instruction that it will be running as a replica set node, and that the unique name of this replica set is MyReplicaSet. MongoDB uses this name to identify the members of a replica set cluster. It can be changed in the future, but doing so requires you to shut down all the nodes within the cluster.
In step 2, we open a different Terminal window and start the mongo shell that is connected to our aforementioned node. We check the replica set's status by running the rs.status() command. If you ever happen to work with replica sets, rs.status() will become the most frequent command you will use for eons to come. I would also like to point out that all major replica set operations are available in the rs.<command> format. To view your options, type rs. (with the trailing dot) and press the Tab key twice.
OK, coming back to the output of rs.status(), we can see that MongoDB is indicating that our replica set has not been initialized. We initialize it by running the rs.initiate() command in step 3. At this point, if you keep pressing the Enter key (without any parameters), you can see your mongo shell show the node transitioning from SECONDARY to PRIMARY:
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "vagrant-ubuntu-trusty-64:27017",
"ok" : 1
}
MyReplicaSet:SECONDARY>
MyReplicaSet:PRIMARY>
MyReplicaSet:PRIMARY>
From now on, every time you connect to this node, you will see the replica set name followed by the node's status. Next, we run the rs.status() command again and this time get the detailed status of the replica set's configuration. Let's go through some of the key values of the output:
State number | State | Description
0 | STARTUP | The node is parsing its configuration and starting up
1 | PRIMARY | The node is the primary member of the cluster
2 | SECONDARY | The node is a secondary member of the cluster
3 | RECOVERING | The node is completing either a rollback or resync after starting up
7 | ARBITER | The node is an arbiter; it does not store any data
8 | DOWN | The node is marked as DOWN, usually when it is unreachable
10 | REMOVED | The node has been removed from the replica set configuration
Once we execute the rs.initiate() command, MongoDB picks up any configuration parameters associated with this replica set (in the form of a config file or mongod command-line flags) and initializes the replica set. In our case, we only specified the name of the replica set, MyReplicaSet, as a mongod parameter.
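If you prefer full control from the start, rs.initiate() also accepts an explicit configuration document; here is a minimal sketch using the host name reported in the earlier output:
rs.initiate({
  _id: 'MyReplicaSet',
  members: [
    {_id: 0, host: 'vagrant-ubuntu-trusty-64:27017'}
  ]
})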
In step 5, by looking at the mongod process logs, we can observe the various stages the application goes through, while trying to bring up a node in a replica set. The information is pretty verbose, so I will not go into detail.
In this recipe, we will be looking at how to add a node to an existing replica set.
Ensure that you have a single node replica set running as mentioned in the first recipe of this chapter.
mongod --dbpath /data/server2/db --replSet MyReplicaSet --port 27018
mongo mongodb://192.168.200.200:27017
rs.status()['members']
[
{
"_id" : 0,
"name" : "vagrant-ubuntu-trusty-64:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 36,
"optime" : {
"ts" : Timestamp(1503664489, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-08-25T12:34:49Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1503664458, 1),
"electionDate" : ISODate("2017-08-25T12:34:18Z"),
"configVersion" : 1,
"self" : true
}
]
rs.add('192.168.200.200:27018')
{ "_id" : 0,
"name" : "vagrant-ubuntu-trusty-64:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 71,
"optime" : {
"ts" : Timestamp(1503664527, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-08-25T12:35:27Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1503664458, 1),
"electionDate" : ISODate("2017-08-25T12:34:18Z"),
"configVersion" : 2,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.200.200:27018",
"health" : 1,
"state" : 0,
"stateStr" : "STARTUP",
"uptime" : 1,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-08- 5T12:35:27.327Z"),
"lastHeartbeatRecv" : ISODate("2017-08-5T12:35:27.378Z"),
"pingMs" : NumberLong(0),
"configVersion" : -2
}
As mentioned earlier, this recipe assumes that you are already running the first (primary) node in your replica set, as shown in the previous recipe. In step 1, we start another instance of mongod listening on a different port (27018). I just want to reiterate that as this is a test setup, we will be running all instances of mongod on the same server, but in a production setup, all replica set members should run on separate servers.
In step 2, we look at the output of the rs.status() command, more importantly the members array. As of now, although we have started a new instance, the primary replica set node is not aware of its existence. Therefore, the list of members would only show one member. Let's fix this.
In step 3, we run rs.add('192.168.200.200:27018') in the mongo shell, which is connected to the primary node. The rs.add() method is a wrapper around the actual replSetReconfig command in that it adds a node to the members array and reconfigures the replica set. We will look into replica set reconfiguration in future recipes. Next, we look again at the output of the rs.status() command. If you inspect the members array, you will find our second member. If you have run the command soon after rs.add(...), you may be able to see the following:
"_id" : 1,
"name" : "192.168.200.200:27018",
"health" : 1,
"state" : 0,
"stateStr" : "STARTUP",
The "state" : 0 string indicates that the member is parsing its configuration and starting up. If you run the rs.status() command again, this should change to "state" : 2, indicating that the node is a secondary node.
To finish off this recipe, I would like you to start another instance of mongod on port 27019 and add it to the cluster.
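If you would like a starting point, here is a minimal sketch of those two steps, reusing the /data/server3 directories created earlier and the same host IP:
mongod --dbpath /data/server3/db --replSet MyReplicaSet --port 27019
Then, from the mongo shell connected to the primary:
rs.add('192.168.200.200:27019')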
In this recipe, we will be looking at how to remove a member from a replica set. If you have done the previous two recipes in this chapter, this should be a breeze.
For this recipe, we will need a three node replica set. If you don’t have one ready, I suggest referring to the first two recipes of this chapter.
rs.status()['members']
[
{
"_id" : 0,
"name" : "vagrant-ubuntu-trusty-64:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 57933,
"optime" : {
"ts" : Timestamp(1503722389, 1),
"t" : NumberLong(5)
},
"optimeDate" : ISODate("2017-08-26T04:39:49Z"),
"electionTime" : Timestamp(1503721808, 1),
"electionDate" : ISODate("2017-08-26T04:30:08Z"),
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.200.200:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 51609,
"optime" : {
"ts" : Timestamp(1503722389, 1),
"t" : NumberLong(5)
},
"optimeDurable" : {
"ts" : Timestamp(1503722389, 1),
"t" : NumberLong(5)
},
"optimeDate" : ISODate("2017-08-26T04:39:49Z"),
"optimeDurableDate" : ISODate("2017-08-26T04:39:49Z"),
"lastHeartbeat" : ISODate("2017-08-26T04:39:51.239Z"),
"lastHeartbeatRecv" : ISODate("2017-08-
6T04:39:51.240Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "vagrant-ubuntu-trusty-64:27017",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "192.168.200.200:27019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 84,
"optime" : {
"ts" : Timestamp(1503722389, 1),
"t" : NumberLong(5)
},
"optimeDurable" : {
"ts" : Timestamp(1503722389, 1),
"t" : NumberLong(5)
},
"optimeDate" : ISODate("2017-08-26T04:39:49Z"),
"optimeDurableDate" : ISODate("2017-08-26T04:39:49Z"),
"lastHeartbeat" : ISODate("2017-08-26T04:39:51.240Z"),
"lastHeartbeatRecv" : ISODate("2017-08-
6T04:39:51.307Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.200.200:27018",
"configVersion" : 3
}
]
rs.remove('192.168.200.200:27019')
rs.status()['members']
[
{
"_id" : 0,
"name" : "vagrant-ubuntu-trusty-64:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 57998,
"optime" : {
"ts" : Timestamp(1503722449, 2),
"t" : NumberLong(5)
},
"optimeDate" : ISODate("2017-08-26T04:40:49Z"),
"electionTime" : Timestamp(1503721808, 1),
"electionDate" : ISODate("2017-08-26T04:30:08Z"),
"configVersion" : 4,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.200.200:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 51673,
"optime" : {
"ts" : Timestamp(1503722449, 2),
"t" : NumberLong(5)
},
"optimeDurable" : {
"ts" : Timestamp(1503722449, 2),
"t" : NumberLong(5)
},
"optimeDate" : ISODate("2017-08-26T04:40:49Z"),
"optimeDurableDate" : ISODate("2017-08-26T04:40:49Z"),
"lastHeartbeat" : ISODate("2017-08-26T04:40:55.956Z"),
"lastHeartbeatRecv" : ISODate("2017-08-
6T04:40:55.956Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "vagrant-ubuntu-trusty-64:27017",
"configVersion" : 4
}
]
rs.status()
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 338,
> "optime" : {
"ts" : Timestamp(1503722619, 1),
"t" : NumberLong(5)
},
"optimeDate" : ISODate("2017-08-26T04:43:39Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}
MyReplicaSet:OTHER>
In step 1, we connect to one of the three replica set members and check the replica set status. We want to ensure two things: first, that the node we are connected to is the primary, and second, that the node we want to remove is a secondary.
Now that we've determined that we are connected to the primary node, in step 2, we remove a node from the replica set by passing its IP address and port to rs.remove().
In step 3, we confirm that the node is removed by running rs.status() to get the list of configured nodes in the cluster. Finally, in step 4, we connect to the mongo shell of the node that we just removed. As soon as you log in, you can observe that the console prompt shows OTHER instead of PRIMARY or SECONDARY. Also, the rs.status() command's output confirms that the node is in state 10 (REMOVED), indicating that this node is no longer in the replica set cluster. At this point, I would also like you to go through the mongod logs of this node and observe the sequence of events that occur when we run rs.remove():
2017-08-26T04:40:51.338+0000 I REPL [ReplicationExecutor] Cannot find self in new replica set configuration; I must be removed; NodeNotFound: No host described in new configuration 4 for replica set MyReplicaSet maps to this node
2017-08-26T04:40:51.339+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: "MyReplicaSet", version: 4, protocolVersion: 1, members: [ { _id: 0, host: "vagrant-ubuntu-trusty-64:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "192.168.200.200:27018", arbiterOnly: false, buildIndexes: true, hidden:
false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('59991df64db063a571ae8680') } }
2017-08-26T04:40:51.339+0000 I REPL [ReplicationExecutor] This node is not a member of the config
2017-08-26T04:40:51.339+0000 I REPL [ReplicationExecutor] transition to REMOVED
As we ran rs.remove('192.168.200.200:27019') on the primary node, a new configuration was generated. This configuration is sent to all new or existing nodes of the replica set and the relevant changes are implemented. In the log output shown previously, you can see that the replica set node got the new configuration and figured out that it had been removed from the replica set cluster. It then reconfigured itself and transitioned to the REMOVED state.
In MongoDB, nodes within replica sets perform elections to select a primary node. To ensure there is always an odd number of voting members, and hence a clear majority, you can add an arbiter to the replica set. An arbiter is a mongod instance that does not store data but is only involved in voting during an election process. This can prove very useful, especially during network partitions that would otherwise result in tied votes.
We can continue from the previous recipe; all we need is a two node replica set.
mkdir -p /data/arbiter/db
mongod --dbpath /data/arbiter/db --replSet MyReplicaSet --port 30000
mongo mongodb://192.168.200.200:27017
rs.addArb('192.168.200.200:30000')
rs.status()['members']
[
{
"_id" : 0,
"name" : "vagrant-ubuntu-trusty-64:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 61635,
"optime" : {
"ts" : Timestamp(1503726090, 1),
"t" : NumberLong(8)
},
"optimeDate" : ISODate("2017-08-26T05:41:30Z"),
"electionTime" : Timestamp(1503725438, 1),
"electionDate" : ISODate("2017-08-26T05:30:38Z"),
"configVersion" : 5,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.200.200:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1214,
"optime" : {
"ts" : Timestamp(1503726090, 1),
"t" : NumberLong(8)
},
"optimeDurable" : {
"ts" : Timestamp(1503726090, 1),
"t" : NumberLong(8)
},
"optimeDate" : ISODate("2017-08-26T05:41:30Z"),
"optimeDurableDate" : ISODate("2017-08-26T05:41:30Z"),
"lastHeartbeat" : ISODate("2017-08-26T05:41:32.024Z"),
"lastHeartbeatRecv" : ISODate("2017-08-26T05:41:30.034Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
},
{
"_id" : 2,
"name" : "192.168.200.200:30000",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 3,
"lastHeartbeat" : ISODate("2017-08-26T05:41:32.025Z"),
"lastHeartbeatRecv" : ISODate("2017-08-26T05:41:30.034Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
}
]
We begin by creating the directories for the arbiter process. As mentioned at the beginning of this recipe, an arbiter is nothing but a mongod process that does not store any data. However, it does need to store some metadata about itself, and hence a minimal amount of state has to be maintained.

For this purpose, in step 2, we provide the --dbpath parameter with a location to store its data along with an arbitrary port 30000.
In step 3, we connect to the primary node of our replica set, and in step 4, we use the rs.addArb() wrapper to add the new arbiter.
Next, in step 5, we check the status of the replica set; lo and behold, the mighty arbiter has been added to the replica set. If you look at the state and stateStr keys, you will see that this member's state is set to 7, which confirms it is an arbiter.
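If you want an additional check from the shell, the arbiter is also flagged in the replica set configuration itself; a minimal sketch, assuming the arbiter ended up as the third member (array index 2), as in our output:
rs.conf()['members'][2]['arbiterOnly']   // returns true for the arbiter added by rs.addArb()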
In this recipe, we will be looking at how to force a primary node to become secondary and vice versa. Let’s get to it then.
We need a three node replica set, preferably without an arbiter. If you have followed the previous recipes, we should have three mongod instances running on the same host on three different ports, 27017, 27018, and 27019. In order to keep things simple, we will call them node 1, node 2, and node 3 respectively. Here, we assume that node 1 is primary, whereas node 2 and node 3 are secondary. In the first part of this recipe, we will force node 1 to become secondary. Assuming that node 3 gets elected as primary, in the second part of the recipe, we will try to make node 1 primary again.
mongo mongodb://192.168.200.200:27017
rs.stepDown()
rs.isMaster()['ismaster']
mongo mongodb://192.168.200.200:27018
rs.freeze(120)
mongo mongodb://192.168.200.200:27019
rs.stepDown()
rs.freeze(120)
mongo mongodb://192.168.200.200:27017
rs.isMaster()['ismaster']
Forcing a primary node to step down is a fairly straightforward process. As shown in steps 1 and 2, we just need to log in to the primary node and run the rs.stepDown() command. This forces the node to become secondary and initiates an election in the replica set. Within a few seconds (or less), one of the secondary nodes will be elected as the new primary. In this recipe, we assume that node 3 got elected as the new primary node.
In step 3, we run another neat little helper, rs.isMaster(), and look for the value of the ismaster key. If its value is set to true, then the current node is a primary. Otherwise, it is a secondary.
For the next part, we work towards converting a particular secondary node to a primary node. This involves a new command called rs.freeze(). This wrapper executes the replSetFreeze command, which prevents the member from seeking election. So, our strategy is to prevent all nodes from seeking election, except for the one that we want to become the primary.
We do exactly that in step 4: we log in to node 2 and run rs.freeze(120), which prevents it from seeking election for the next 120 seconds.
Next, in step 5, we log in to our newly elected primary, node 3, and make it step down as primary. Finally, in step 6, we run rs.freeze(120), which prevents it from seeking election for the next 120 seconds.
Once done, we confirm that node 1 is now our primary, as expected. All hail Cthulhu!
Up until now, we were performing replica set modifications using helper functions like rs.add(), rs.remove(), and so on. As mentioned earlier, these functions are wrappers which modify the replica set configuration. In this recipe, we will be looking at how to fetch and change the replica set configuration. This can be helpful for various operations like setting priorities, delayed nodes, changing member hostnames, and so on.
For this recipe, you will need a three node replica set.
mongo mongodb://192.168.200.200:27017
conf = rs.conf()
conf['members'].pop(2)
rs.reconfig(conf)
rs.status()['members']
member = {"_id": 2, "host": "192.168.200.200:27019"}
conf['members'].push(member)
rs.reconfig(conf)
rs.status()['members']
Like our previous recipes, replica set configuration operations can only be performed on the primary node. Once we connect to the primary node, we fetch the running configuration of the replica set using rs.conf(). In step 2, we are storing the value of rs.conf() in a variable called conf. The replica set configuration is a JavaScript object, and therefore we can modify it within the mongo shell.
The configuration contains an array of members. So, in order to remove a member, we simply have to remove its entry from the array and reload the configuration with the new values. In step 3, we use JavaScript's native pop() method on the members array; note that pop() ignores any argument passed to it and always removes the last element, which in our case happens to be the third member (array indexes start from zero, so it carries _id 2). Next, in step 4, we simply run the rs.reconfig() function while providing it the modified configuration. This function reloads the configuration, and in step 5, we can confirm that the node was indeed removed.
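As an aside, if you prefer to remove a member by its position in the array rather than relying on pop() removing the last element, the shell's JavaScript splice() method works as well; a minimal sketch, assuming you still want to drop the member at index 2:
conf = rs.conf()
// splice(startIndex, deleteCount): remove exactly one entry at index 2
conf['members'].splice(2, 1)
rs.reconfig(conf)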
In step 6, we create an object that contains the _id and host entries for the node that we wish to add. Next, we push this object onto the configuration's members array. Finally, in step 7, we reload the configuration again and confirm that the node was added back to the replica set.
By now, you will have noticed the priority field in the replica set configuration (visible in the rs.conf() output). Replica set members with higher priorities are more likely to be elected as primaries. The value of priority can range from 0 to 1000. A member with a priority of 0 functions as a regular member of the replica set and can still vote in elections, but it can never be elected as primary; non-voting members, by contrast, are configured separately by setting their votes to 0.
For this recipe, we need a three node replica set.
mongo mongodb://192.168.200.200:27017
conf = rs.conf()
conf['members'][0].priority = 5
conf['members'][1].priority = 2
conf['members'][2].priority = 2
rs.reconfig(conf)
rs.conf()['members']
Like our previous recipe, we connect to the primary node and fetch the replica set configuration object. Next, in step 3, we modify the value of the priority key of each member in the members array. In step 4, we reconfigure the replica set configuration. Lastly, in step 5, we can confirm that the changes have taken effect by inspecting the output of the rs.conf() command.
So, why would you need to set priorities in the first place? Well, there can be various circumstances in which you need control over which member gets elected as the primary. As a simple example, suppose you need to perform sequential maintenance on your replica set members; setting priorities lets you control which node becomes primary if an election kicks in during the maintenance.
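For instance, if one member sits on weaker hardware and should never become primary, you can pin its priority to 0; a minimal sketch, assuming the member in question is at index 2 of the members array:
conf = rs.conf()
// a priority of 0 means this member can never be elected primary,
// although it still replicates data and votes in elections
conf['members'][2].priority = 0
rs.reconfig(conf)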
Along with priority, we can also set delayed sync and hidden members in replica sets. We will be looking closely at how to set these up later in the book in Chapter 7, Restoring MongoDB from Backups.
In this chapter, we will cover the following recipes:
In the previous chapter, we saw how MongoDB provides high availability using replica sets. Replica sets also allow distributing read queries across slaves, thus providing a fair bit of load distribution across a cluster of nodes. We have also seen that MongoDB performs most optimally if its working datasets can fit in memory with minimal disk operations. However, as databases grow, it becomes harder to provision servers that can effectively fit the entire working set in memory. This is one of the most common scalability problems faced by most growing organizations.
To address this, MongoDB provides sharding of collections. Sharding allows dividing the data into smaller chunks and distributing it across multiple machines.

Unlike replica sets, a sharded MongoDB cluster consists of multiple components.
The config server is used to store metadata about the sharded cluster. It contains details about authorizations, as well as the admin and config databases. The metadata stored in the config server is read by mongos and the shards, making its role extremely important during the operation of the sharded cluster. Thus, it is highly recommended that the config server is set up as a replica set, with appropriate backup and monitoring configured.
MongoDB's mongos server acts as an interface between the application and the sharded cluster. First, it gathers information (metadata) about the sharded cluster from the config server. Once it has the relevant information, it acts as a proxy for all read and write operations on the cluster. In other words, applications only talk to the mongos server and never talk directly to a shard.
More information on how mongos routes queries can be seen at: https://docs.mongodb.com/manual/core/sharded-cluster-query-router/.
The shard server is nothing but a mongod instance that is executed with the --shardsvr switch. Chunks of data are assigned to each shard server based on the shard key used for the collection. All queries executed on a shard should come through the mongos query router; applications should never communicate directly with an individual shard.
In order to partition data across multiple shards, MongoDB uses a shard key. This is an immutable key that can be used to identify a document within a sharded collection. Based on boundaries of the shard key, the data is then divided into chunks and spread across multiple shards within a cluster. It is important to note that MongoDB provides sharding at the collection level and a sharded collection can have only one shard key. As shard keys are immutable, we cannot change a key once it is set. It is extremely important to properly plan shard keys before setting up a sharded cluster.
MongoDB provides two sharding strategies: hashed shard keys and ranged shard keys.
With hashed shard keys, MongoDB computes a hash of the shard key's value and builds the index on that hash. The data is then evenly distributed across the cluster. So, at the expense of range queries being broadcast to all shards, we achieve an even distribution of data.
A ranged shard key is the default strategy used by MongoDB. In this strategy, MongoDB splits the key's range into chunks and distributes these chunks accordingly. This increases the chance that documents whose shard key values are close to each other are stored on the same shard. In such cases, queries are not broadcast to all the shards and DB operations become faster. However, this can also lead to individual shards getting overloaded for certain ranges of keys.
For example, if we use a ranged shard key on language and keep adding a large number of documents for English-speaking users, the shard holding that key range would receive all of those documents. So there is a good chance that document distribution would become uneven.
So it is extremely important to plan out your sharding strategy far in advance. All aspects of your applications must be thoroughly understood before choosing a shard key strategy.
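To make the two strategies concrete, here is a minimal sketch of how each would be declared with sh.shardCollection(); the namespace matches the one used later in this chapter, and the choice of field is purely illustrative:
// ranged shard key on the language field (the default strategy)
sh.shardCollection('myShardedDB.people', { language: 1 })
// hashed shard key on the same field; documents are distributed by the hash of the value
sh.shardCollection('myShardedDB.people', { language: 'hashed' })
Note that a collection can only be sharded once, so in practice you would pick one of the two, not both.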
More information about shard key specifications can be found at: https://docs.mongodb.com/manual/core/sharding-shard-key.
In this recipe, we will look at how to set up a sharded cluster in MongoDB. The cluster includes config servers, shards, and mongos servers. As this is a test setup, we will be running all relevant binaries from a single virtual machine; however, in production, they should be located on separate nodes. Next, we will look at how to enable sharding on a database, followed by sharding an actual collection. Once the sharded cluster is ready, we will import some data to the cluster and execute queries that would give us a glimpse of how the data is partitioned across the shards. Much fun awaits, let's get started!
There are no additional components required besides standard MongoDB binaries. Create the following directories in advance for the config server as well as the shards:
mkdir -p /data/{cfgserver1,shard1,shard2,shard3}/data
mongod --configsvr --dbpath /data/cfgserver1/data --port 27019 --replSet MyConfigRepl
mongo localhost:27019
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "vagrant-ubuntu-trusty-64:27019",
"ok" : 1
}
rs.status()['configsvr']
true
mongod --shardsvr --dbpath /data/shard1/data --port 27027
mongod --shardsvr --dbpath /data/shard2/data --port 27028
mongod --shardsvr --dbpath /data/shard3/data --port 27029
mongos --configdb MyConfigRepl/192.168.200.200:27019
sh.addShard('192.168.200.200:27027')
{ "shardAdded" : "shard0000", "ok" : 1 }
sh.addShard('192.168.200.200:27028')
{ "shardAdded" : "shard0001", "ok" : 1 }
sh.addShard('192.168.200.200:27029')
{ "shardAdded" : "shard0002", "ok" : 1 }
sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("59c7950c9be3cff24816915a")
}
shards:
{ "_id" : "shard0000", "host" : "192.168.200.200:27027", "state" : 1 }
{ "_id" : "shard0001", "host" : "192.168.200.200:27028", "state" : 1 }
{ "_id" : "shard0002", "host" : "192.168.200.200:27029", "state" : 1 }
<-- output truncated -->
sh.enableSharding('myShardedDB')
{ "ok" : 1 }
sh.status()
--- Sharding Status ---
<--output truncated-->
databases:
{ "_id" : "myShardedDB", "primary" : "shard0001", "partitioned" : true }
sh.shardCollection('myShardedDB.people', {language: 1})
{ "collectionsharded" : "myShardedDB.people", "ok" : 1 }
sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("59c7950c9be3cff24816915a")
}
shards:
{ "_id" : "shard0000", "host" : "192.168.200.200:27027", "state" : 1 }
{ "_id" : "shard0001", "host" : "192.168.200.200:27028", "state" : 1 }
{ "_id" : "shard0002", "host" : "192.168.200.200:27029", "state" : 1 }
<-- output truncated -->
databases:
{ "_id" : "myShardedDB", "primary" : "shard0001", "partitioned" : true }
myShardedDB.people
shard key: { "language" : 1 }
unique: false
balancing: true
chunks:
shard0001 1
{ "language" : { "$minKey" : 1 } } -->> { "language" : { "$maxKey" : 1 } } on : shard0001 Timestamp(1, 0)
mongoimport -h 192.168.200.200 --type csv --headerline -d myShardedDB -c people chapter_2_mock_data.csv
sh.status()
--- Sharding Status ---
--- <output truncated> ---
{ "_id" : "myShardedDB", "primary" : "shard0001", "partitioned" : true }
myShardedDB.people
shard key: { "language" : 1 }
unique: false
balancing: true
chunks:
shard0000 1
shard0001 2
shard0002 1
{ "language" : { "$minKey" : 1 } } -->> { "language" : "" } on : shard0000 Timestamp(2, 0)
{ "language" : "" } -->> { "language" : "Irish Gaelic" } on : shard0002 Timestamp(3, 0)
{ "language" : "Irish Gaelic" } -->> { "language" : "Norwegian" } on : shard0001 Timestamp(3, 1)
{ "language" : "Norwegian" } -->> { "language" : { "$maxKey" : 1 } } on : shard0001 Timestamp(1, 4)
db.people.find({ "language" : "Norwegian" }).explain()
{
"queryPlanner" : {
"mongosPlannerVersion" : 1,
"winningPlan" : {
"stage" : "SINGLE_SHARD",
"shards" : [
{
"shardName" : "shard0001",
"connectionString" : "192.168.200.200:27028",
<--output truncated -->
db.people.find({ "language": {"$in": ["Norwegian", "Arabic"]} }).explain()
{
"queryPlanner" : {
"mongosPlannerVersion" : 1,
"winningPlan" : {
"stage" : "SHARD_MERGE",
"shards" : [
{
"shardName" : "shard0001",
<-- output truncated -->
"shardName" : "shard0002",
<-- output truncated -->
We begin setting up the sharded cluster by starting a single instance of the config server in step 1. As of MongoDB 3.4, it is required that the config server is set up as a replica set. However, for demonstration purposes, we are only going to run one config server in this replica set. The service runs through the mongod binary with the --configsvr parameter and takes in --dbpath as well as --port. As the config server contains metadata, including authorization details, it does make sense to run it as a replica set while ensuring we maintain optimal backups and monitoring. We will cover more on the latter in future chapters.
In step 2, we connect to the config server using the mongo shell and initiate the replica set. This is a pretty straightforward operation, as we have seen previously in Chapter 4, High Availability with Replication. The only point I would highlight here is that if you run rs.status() on a config server, you should see a key which says 'configsvr' : true. This key should be verified to confirm that the replica set is indeed for your config server.
In step 3, we start three instances of mongod shards, each pointing to a separate --dbpath and --port. Up until this point, the shards are not configured, and hence, they are simply waiting for information from the config server.
In step 4, we start the mongos service and explicitly point it to the config server replica set using the --configdb switch. You will note that the connection string takes the name of the config server replica set as its prefix and is followed by the IP/hostname of the config server; for example,
ReplicaSetName/host1:port1,host2:port2,...hostN:portN
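For instance, a production deployment with a hypothetical three-member config server replica set might start mongos as follows (the second and third hostnames are illustrative):
mongos --configdb MyConfigRepl/192.168.200.200:27019,cfg2.example.com:27019,cfg3.example.com:27019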
At this point, our sharded cluster now contains a config server and a query router (mongos). We now need to add the shard servers to the cluster. To begin, we connect to the mongos service (in step 5) and use the sh.addShard() function to add each shard:
sh.addShard('192.168.200.200:27027')
{ "shardAdded" : "shard0000", "ok" : 1 }
The string shard0000 is a unique ID of this shard. By running sh.status(), we can confirm that all three shards have been added, each with a unique ID. Additionally, you can also observe that at this point the databases section, in the sh.status() output, is empty (this is expected).
In step 6, we enable sharding for the database myShardedDB by using the command sh.enableSharding('myShardedDB'). If you run the sh.status() command, you will observe that the databases section now shows the following:
{ "_id" : "myShardedDB", "primary" : "shard0001", "partitioned" : true }
Here, MongoDB has assigned the shard with ID shard0001 as the primary shard for the database (chances are that your setup may have chosen a different ID, and that's okay). There is a good chance that you will have more than one collection in your database, so MongoDB selects a primary shard to store the data of any non-sharded collections.
Now comes the most important part of this whole exercise—selecting the shard key. As shards store data in chunks, distribution of chunks is determined by the type of sharding key used. As discussed in the previous section of this chapter, by default, MongoDB uses ranged keys. We will use this key type in our example setup as well.
In step 7, we shard the collection named people on the field language. This creates a ranged sharding key on the language field in ascending order. Run sh.status() and view the databases section in the command output.
As we have no data in the collection, you should see there is exactly one chunk, on shard0001, and it covers the entire range of the field, that is, from $minKey to $maxKey.
In step 8, we import some sample data in our newly sharded collection. The data imported is available in the file chapter_2_mock_data.csv, which can be downloaded from the Packt website.
Now that we have the data imported, let's have a look at the status of the shard. In step 9, we run sh.status() and inspect the databases section of the output. You can see that, based on the index created on the language field, MongoDB has partitioned the data into four chunks across all three shards. As we have used a ranged shard key, the partitions are based on the string values of the language key. For example, shard0001 has two chunks: one contains all documents whose language value ranges from Irish Gaelic to Norwegian, and the other contains all values from Norwegian to $maxKey (the end of the index).
With this information in mind, we now know that all records for {language: "Norwegian"} would reside on one shard, that is shard0001. This can be confirmed by running a find() operation, as seen in step 10. The winning plan indicates that the result was obtained from a single shard.
In step 11, we run a similar query, but this time the range spreads across multiple shards. Here, mongos queries two shards (shard0001 and shard0002) and awaits their responses. Then, mongos merges the results and presents them to the application (in our case, the mongo shell).
As an exercise, I would suggest you re-create the cluster, but with a hashed shard key, and observe how the chunks are created.
By now, you should be familiar with the notion of chunks in a MongoDB sharded cluster. In this recipe, we will look at how to split chunks and migrate them across shards.
Ensure you have a sharded cluster ready. If you are reusing the setup from the previous recipe, ensure that you drop the database, as follows:
use myShardedDB
db.dropDatabase()
Before we import the mock data, enable sharding:
sh.enableSharding('myShardedDB')
sh.shardCollection('myShardedDB.users', {age: 1})
Finally, we need to import the mock data using the mongoimport utility:
mongoimport -h 192.168.200.200 --type csv --headerline -d myShardedDB -c users chapter_5_mock_data.csv
sh.status()
--- Sharding Status ---
sharding version: {
<-- output truncated -->
databases:
{ "_id" : "myShardedDB", "primary" : "shard0002", "partitioned" : true }
myShardedDB.users
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
shard0000 1
shard0001 1
shard0002 2
{ "age" : { "$minKey" : 1 } } -->> { "age" : 18 } on : shard0000 Timestamp(2, 0)
{ "age" : 18 } -->> { "age" : 54 } on : shard0001 Timestamp(3, 0)
{ "age" : 54 } -->> { "age" : 73 } on : shard0002 Timestamp(3, 1)
{ "age" : 73 } -->> { "age" : { "$maxKey" : 1 } } on : shard0002 Timestamp(1, 4)
sh.stopBalancer()
use admin
db.runCommand({split: 'myShardedDB.users', middle: {age: 50}})
sh.status()
--- Sharding Status ---
<-- output truncated -->
databases:
{ "_id" : "myShardedDB", "primary" : "shard0002", "partitioned" : true }
myShardedDB.users
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
shard0000 1
shard0001 2
shard0002 2
{ "age" : { "$minKey" : 1 } } -->> { "age" : 18 } on : shard0000 Timestamp(2, 0)
{ "age" : 18 } -->> { "age" : 50 } on : shard0001 Timestamp(3, 2)
{ "age" : 50 } -->> { "age" : 54 } on : shard0001 Timestamp(3, 3)
{ "age" : 54 } -->> { "age" : 73 } on : shard0002 Timestamp(3, 1)
{ "age" : 73 } -->> { "age" : { "$maxKey" : 1 } } on : shard0002 Timestamp(1, 4)
sh.moveChunk('myShardedDB.users', {age: 52}, "shard0000")
{ "millis" : 177, "ok" : 1 }
sh.startBalancer()
{ "ok" : 1 }
sh.status()
--- Sharding Status ---
<-- output truncated -->
databases:
{ "_id" : "myShardedDB", "primary" : "shard0002", "partitioned" : true }
myShardedDB.users
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
shard0000 2
shard0001 1
shard0002 2
{ "age" : { "$minKey" : 1 } } -->> { "age" : 18 } on : shard0000 Timestamp(2, 0)
{ "age" : 18 } -->> { "age" : 50 } on : shard0001 Timestamp(4, 1)
{ "age" : 50 } -->> { "age" : 54 } on : shard0000 Timestamp(4, 0)
{ "age" : 54 } -->> { "age" : 73 } on : shard0002 Timestamp(3, 1)
{ "age" : 73 } -->> { "age" : { "$maxKey" : 1 } } on : shard0002 Timestamp(1, 4)
For this recipe, we've intentionally used a dataset that can help us work with ranged shard keys. In our mock data, we are going to shard on the age field, as it is always going to be numeric.
In step 1, we connect to the mongos service and inspect the shards and their corresponding chunks. In this example output, you can see that the data is partitioned into four chunks, ranging from $minKey to 18, 18 to 54, 54 to 73, and 73 to $maxKey.
You can also observe that shard0002 has two chunks whereas the other two shards have one chunk each.

This distribution of chunks is carried out by MongoDB's balancer. The balancer is a background process, running on the primary of the config server replica set, that migrates chunks so that they are evenly distributed across the shards. As chunk migration has performance implications, the balancer ensures that a shard takes part in only one migration at a time. Additionally, whether chunks need to be migrated from one shard to another is determined by the migration thresholds, as described at:
https://docs.mongodb.com/manual/core/sharding-balancer-administration/#sharding-migration-thresholds.
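You can inspect the balancer from the mongos shell at any time; a minimal sketch using the standard shell helpers:
sh.getBalancerState()      // true if the balancer is enabled
sh.isBalancerRunning()     // true if a balancing round is in progress right now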
Next, in step 2, we stop the balancer process before we proceed to split the chunks. In step 3, we select the admin database and split the chunk containing {age: 50} at that value. In step 4, we inspect the sh.status() output and confirm that the chunk has been split. In step 5, we wish to move a chunk from shard0001 to shard0000. To do this, we run the sh.moveChunk() command with the collection name, a value of the shard key that falls within the chunk we wish to migrate, and the target shard ID. Finally, we start the balancer process and view the sh.status() output. We can observe that the chunk has been migrated to shard0000, as expected.
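As an aside, the mongo shell also provides helpers that wrap the split command used in step 3; a minimal sketch that should be equivalent to the runCommand call above:
// split the chunk containing {age: 50} exactly at that value
sh.splitAt('myShardedDB.users', { age: 50 })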
There are a lot of nuances when working with chunk migrations and it is highly recommended that you go through data partitioning guidelines mentioned at: https://docs.mongodb.com/manual/core/sharding-data-partitioning/.
In this recipe, we will look at how to migrate non-sharded data to another shard.
We need a sharded cluster, preferably the one we created in the previous recipe.
use myShardedDB
sh.status()
--- Sharding Status ---
<-- output truncated -->
databases:
{ "_id" : "myShardedDB", "primary" : "shard0002", "partitioned" : true }
<-- output truncated -->
db.my_col.insert({foo: 'bar'})
db.my_col.find({foo: 'bar'}).explain()['queryPlanner']['winningPlan']
{
"stage" : "SINGLE_SHARD",
"shards" : [
{
"shardName" : "shard0002",
"<--output truncated-->
use admin
db.runCommand( { movePrimary: 'myShardedDB', to: 'shard0000' } )
{ "primary" : "shard0000:192.168.200.200:27027", "ok" : 1 }
sh.status()
--- Sharding Status ---
<-- output truncated -->
databases:
{ "_id" : "myShardedDB", "primary" : "shard0000", "partitioned" : true }
<-- output truncated -->
use myShardedDB
db.my_col.find({foo: 'bar'}).explain()['queryPlanner']['winningPlan']
{
"stage" : "SINGLE_SHARD",
"shards" : [
{
"shardName" : "shard0000",
<-- output truncated -->
In a sharded cluster, MongoDB selects one node as the primary shard. Any data that is not part of the sharded collection is stored on the primary shard. In step 1, we connect to the mongos service running on the sharded cluster and inspect the shard status by running the sh.status() command. In the given sample output of this command, we can observe that the current primary shard is shard0002. Next, in step 2, we insert a document in the my_col collection of myShardedDB.
As this collection does not exist, a new collection is created on the primary shard (shard0002) and the document is inserted into it. This can be confirmed in step 3, when we run the find() command, followed by the explain() command.
In step 4, we select the admin database and issue the movePrimary command. In that, we provide the database name and the target shard which should become the new primary. Once this command is executed, MongoDB makes shard0000 (in our example) the primary shard and begins copying the data to this new node. In step 5, we can confirm this by running the sh.status() command, as well as running the find() command.
In a production environment, there is a high chance that you will have multiple collections. It is imperative to plan the primary shard such that it can hold the additional (non-sharded) collections along with the chunks of the sharded collections. It is advisable to run the primary shard (or its replica set) on servers that have additional resources compared to regular shards, so that the query overhead of non-sharded collections can be accommodated without any negative performance impact.
In this recipe, we will look at how to remove a shard from an existing cluster.
We need a sharded cluster, preferably the one we created in the Managing chunks recipe of this chapter.
sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("59c7950c9be3cff24816915a")
}
shards:
{ "_id" : "shard0000", "host" : "192.168.200.200:27027", "state" : 1 }
{ "_id" : "shard0001", "host" : "192.168.200.200:27028", "state" : 1 }
{ "_id" : "shard0002", "host" : "192.168.200.200:27029", "state" : 1 }
<-- output truncated -->
databases:
shard0000 2
shard0001 1
shard0002 2
{ "age" : { "$minKey" : 1 } } -->> { "age" : 18 } on : shard0000 Timestamp(2, 0)
{ "age" : 18 } -->> { "age" : 50 } on : shard0001 Timestamp(4, 1)
{ "age" : 50 } -->> { "age" : 54 } on : shard0000 Timestamp(4, 0)
{ "age" : 54 } -->> { "age" : 73 } on : shard0002 Timestamp(3, 1)
{ "age" : 73 } -->> { "age" : { "$maxKey" : 1 } } on : shard0002 Timestamp(1, 4)
db.adminCommand({removeShard: 'shard0002'})
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard0002",
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [ ],
"ok" : 1
}
db.adminCommand({removeShard: 'shard0002'})
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [ ],
"ok" : 1
}
db.adminCommand({removeShard: 'shard0002'})
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "shard0002",
"ok" : 1
}
sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("59c7950c9be3cff24816915a")
}
shards:
{ "_id" : "shard0000", "host" : "192.168.200.200:27027", "state" : 1 }
{ "_id" : "shard0001", "host" : "192.168.200.200:27028", "state" : 1 }
<-- output truncated -->
chunks:
shard0000 3
shard0001 2
{ "age" : { "$minKey" : 1 } } -->> { "age" : 18 } on : shard0000 Timestamp(2, 0)
{ "age" : 18 } -->> { "age" : 50 } on : shard0001 Timestamp(4, 1)
{ "age" : 50 } -->> { "age" : 54 } on : shard0000 Timestamp(4, 0)
{ "age" : 54 } -->> { "age" : 73 } on : shard0001 Timestamp(5, 0)
{ "age" : 73 } -->> { "age" : { "$maxKey" : 1 } } on : shard0000 Timestamp(6, 0)
Removing a shard is a fairly straightforward operation. We begin by connecting to the cluster and inspecting the list of shards. In the sample output, we can observe that there are three shards in the cluster, each with one or more chunks assigned to it. In step 2, we switch to the admin database and run the removeShard command with the ID of the shard that we wish to remove. As soon as this command is issued, the balancer starts draining the shard, that is, migrating its chunks to the remaining shards. We can confirm this from the output of the command, where MongoDB reports draining started successfully. In step 3, we run the same command again to view the status of the migration.
Depending on various factors, such as server resources and network bandwidth, the chunk migration can take time. In our case, you can observe that when we run the command for the second time, the status message says draining ongoing, and if we run it again (after a while), the message indicates that removeshard completed successfully. The latter message indicates that the chunks were migrated and the shard was successfully removed. In step 4, we can confirm this by observing the output of sh.status(), which indicates that the shard is no longer listed in the shards section and there are only two shards in the cluster. The two chunks on shard0002 have been migrated to other shards.
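Because draining can take a while, it is common to poll removeShard until it reports completion; a minimal sketch in the mongo shell, assuming the same shard ID as our example:
// run against the admin database; re-issuing removeShard only reports progress,
// it does not restart the operation
var res = db.adminCommand({removeShard: 'shard0002'})
while (res.state == 'started' || res.state == 'ongoing') {
    sleep(5000)   // wait five seconds between checks
    res = db.adminCommand({removeShard: 'shard0002'})
}
printjson(res)   // expect "state" : "completed" once draining finishes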
In this recipe, we will be looking at MongoDB's shard zones. A zone is essentially a group of shards associated with a specific set of tags. Zones help control how chunks are distributed across shards, based on tags. All reads and writes pertaining to documents within a zone are performed on shards matching that zone. There are various scenarios where zone-based sharded clusters can prove highly useful. For example:
An application that is geographically distributed would require that the frontend, as well as the data store, is close to the user
The application has a multi-tier hardware architecture, such that certain records are fetched from higher-tier (low latency) hardware whereas others can be fetched from lower-tier (higher latency) hardware
We need a sharded cluster, preferably the one we created in the Managing chunks recipe of this chapter.
sh.status()
--- Sharding Status ---
<-- output truncated -->
chunks:
shard0000 2
shard0001 2
shard0002 1
{ "age" : { "$minKey" : 1 } } -->> { "age" : 18 } on : shard0002 Timestamp(7, 0)
{ "age" : 18 } -->> { "age" : 50 } on : shard0001 Timestamp(4, 1)
{ "age" : 50 } -->> { "age" : 54 } on : shard0000 Timestamp(7, 1)
{ "age" : 54 } -->> { "age" : 73 } on : shard0001 Timestamp(5, 0)
{ "age" : 73 } -->> { "age" : { "$maxKey" : 1 } } on : shard0000 Timestamp(6, 0)
sh.addShardTag('shard0000', 'Zone-A')
sh.addShardTag('shard0001', 'Zone-B')
sh.addShardTag('shard0002', 'Zone-C')
sh.addTagRange('myShardedDB.users', {age: MinKey}, {age: 50}, 'Zone-A')
sh.addTagRange('myShardedDB.users', {age: 50}, {age: 54}, 'Zone-B')
sh.addTagRange('myShardedDB.users', {age: 54}, {age: MaxKey}, 'Zone-C')
sh.status()
<-- output truncated -->
chunks:
shard0000 2
shard0001 1
shard0002 2
{ "age" : { "$minKey" : 1 } } -->> { "age" : 18 } on : shard0000 Timestamp(11, 1)
{ "age" : 18 } -->> { "age" : 50 } on : shard0000 Timestamp(8, 0)
{ "age" : 50 } -->> { "age" : 54 } on : shard0001 Timestamp(12, 1)
{ "age" : 54 } -->> { "age" : 73 } on : shard0002 Timestamp(12, 0)
{ "age" : 73 } -->> { "age" : { "$maxKey" : 1 } } on : shard0002
Timestamp(11, 0)
tag: Zone-A { "age" : { "$minKey" : 1 } } -->> { "age" : 50 }
tag: Zone-B { "age" : 50 } -->> { "age" : 54 }
tag: Zone-C { "age" : 54 } -->> { "age" : { "$maxKey" : 1 } }
To begin, in step 1 we connect to the mongos instance of our sharded cluster and view the current distribution of chunks across the shards.
Next, in step 2, we create three tags: Zone-A, Zone-B, and Zone-C, and simultaneously add them to shard0000, shard0001, and shard0002 respectively. At this point, only the tag is created and associated with each individual node. We now need to associate a shard key range with these tags. In step 3, we use the sh.addTagRange() command to associate the range to a tag.

The command's first parameter is the collection name, followed by the minimum and maximum value of the shard key's range. Finally, it takes the name of the tag. The keywords MinKey and MaxKey are reserved MongoDB keywords, used to denote the lowest and the uppermost bounds of a range key.
Finally, in step 4, we rerun the sh.status() command, and by looking at the databases section of the output, we can confirm that the zones were indeed created with appropriate ranges assigned to them.
For more examples, do have a look at MongoDB's official documentation at: https://docs.mongodb.com/manual/core/zone-sharding/.
This brings us to the end of the chapter. I will be extensively covering backup and monitoring of MongoDB shards in later chapters.
In this chapter, we will cover the following recipes:
Two core traits of a well designed database system are consistency of data and the ability to restore data from a known good state. In MongoDB, the former is mostly managed by the underlying server software; for example, using features like write concerns can help ensure that writes received by the cluster are acknowledged under predefined conditions. However, the ability to restore data is, for the most part, still heavily reliant on the backup and restore strategies designed by database administrators. In this chapter, we will be looking at various tools and techniques that will hopefully help you design your optimal backup strategy.
In this recipe, we will be looking at how to take MongoDB backups using the mongodump utility. This utility is part of the MongoDB binary package and is usually available in the bin directory of the package. If you have installed MongoDB using a package management tool, like Ubuntu's apt or Red Hat's yum, then the utility can simply be invoked by typing mongodump in the console.
You need a single node MongoDB installation, preferably with some data in it. Refer to the recipe Creating an index in Chapter 2, Understanding and Managing Indexes, to learn how to import sample data into a MongoDB instance.
mkdir /backups/
cd /backups/
mongodump
2017-10-04T03:26:34.251+0000 writing mydb.mockdata
2017-10-04T03:26:34.459+0000 done dumping mydb.mockdata (100000 documents)
ls -al dump/
drwxr-xr-x 2 root root 4096 Oct 4 03:29 mydb
ls -ahl dump/mydb/
-rw-r--r-- 1 root root 13M Oct 4 03:27 mockdata.bson
-rw-r--r-- 1 root root 85 Oct 4 03:27 mockdata.metadata.json
rm -rf /backups/dump
mongodump --gzip --out /backups/dump
2017-10-04T03:32:52.203+0000 writing mydb.mockdata
2017-10-04T03:32:53.036+0000 done dumping mydb.mockdata (100000 documents)
ls -alh /backups/dump/mydb/
-rw-r--r-- 1 root root 2.8M Oct 4 03:44 mockdata.bson.gz
-rw-r--r-- 1 root root 100 Oct 4 03:44 mockdata.metadata.json.gz
mongodump --gzip --archive=/backups/mydb.archive
2017-10-04T03:45:54.976+0000 writing mydb.mockdata to archive '/backups/mydb.archive'
2017-10-04T03:45:55.705+0000 done dumping mydb.mockdata (100000 documents)
ls -alh /backups
-rw-r--r-- 1 root root 2.8M Oct 4 03:45 mydb.archive
The mongodump utility primarily connects to the mongod or mongos instance and dumps all data (except the local database) in BSON format.
In step 1, we create a common directory to store our backups and in step 2, we switch to that directory and execute the mongodump utility without any parameters.
By default, if no parameters are specified to the utility, mongodump creates a directory called dump in the current working directory. Additionally, for every database on the server, it creates a subdirectory with the name of the database and, within this subdirectory, it creates a BSON file for every collection within the database. We can observe this by inspecting the directories, as shown in step 3.
In addition to BSON files, mongodump also creates metadata files corresponding to the collection. If you inspect the file /backups/dump/mydb/mockdata.metadata.json, you should see the additional metadata (in this case, the index details) of this collection:
{"options":{},"indexes":[{"v":1,"key":{"_id":1},"name":"_id_","ns":"mydb.mockdata"}]}
Next, in step 4, we empty the directory and in step 5, we execute the mongodump utility with two command line parameters. The --gzip option enables compression and --out allows us to explicitly mention the path where the dump directory should be created.
In step 6, by examining the directory /backups/dump/mydb/, we can observe that the files are now compressed and have a .gz extension. If you compare the file sizes with the previous output, you can observe a substantial reduction in size. It goes without saying that compressing backups can greatly reduce their disk utilization, as well as reducing transfer times when copying backups to a remote location. However, if the dataset is very large, compression will incur a CPU overhead in addition to the disk I/O. It is highly recommended to keep this in mind when planning your backup strategy; you may want to have a dedicated disk to store your backups.
Lastly, in step 7, we use the --archive flag, followed by the absolute filename of the archive. Starting with MongoDB Version 3.2, the mongodump utility supports creation of single file archives instead of directory based backups (as seen earlier).
As per the official documentation, archives have a slight benefit over directories, in that the restoration process is faster due to the contiguous nature of data in the file. Additionally, by adding the --gzip flag with --archive, we can ensure that the resulting archive file is compressed. This can be observed by examining the /backups/ directory in step 8.
The mongodump utility also supports writing the dump to standard output (STDOUT). This opens up additional possibilities, like using an alternate compression utility or transferring data to a remote server.
For example, the following command creates an archive and pipes it to the xz compression utility, which then produces an xz-compressed file.
mongodump --archive=- | xz --stdout > mydb.archive.xz
Another use case would be to transfer the archive over the wire to a remote location. An overly simple example is one in which you wish to take a backup on Server-A and copy it over to Server-B. You can run the following command on Server-A:
mongodump --gzip --archive=- | xz --stdout | ssh user@Server-B 'cat > /backups/mydb.archive.xz'
In this recipe, we will be using the mongodump utility to take backups of a specific database and/or collection.
You need a single node MongoDB installation, preferably with some data in it. Refer to the recipe Creating an index in Chapter 2, Understanding and Managing Indexes, for instructions on how to import sample data into a MongoDB instance.
use mydb
show collections
mockdata
system.indexes
system.profile
db.tmpcol.insert({foo:1})
mongodump --gzip -d mydb
2017-10-04T12:05:24.352+0000 writing mydb.mockdata
2017-10-04T12:05:24.353+0000 writing mydb.tmpcol
2017-10-04T12:05:24.361+0000 done dumping mydb.tmpcol (1 document)
2017-10-04T12:05:25.098+0000 done dumping mydb.mockdata (100000 documents)
mongodump --gzip -d mydb -c mockdata
2017-10-04T11:56:37.357+0000 writing mydb.mockdata
2017-10-04T11:56:38.082+0000 done dumping mydb.mockdata (100000 documents)
mongodump --gzip -d mydb --excludeCollection=mockdata
2017-10-04T12:06:03.787+0000 writing mydb.tmpcol
2017-10-04T12:06:03.788+0000 done dumping mydb.tmpcol (1 document)
This recipe should be pretty straightforward to understand. Assuming you have loaded the sample data as mentioned in the Getting ready section, you should have a database called mydb with one collection called mockdata. In step 1, we create another collection in mydb, called tmpcol and insert a document in it.
Next, in step 2, we execute the mongodump utility with the -d parameter, followed by the name of the database we wish to back up. In step 3, we take a more specific backup by adding an additional flag, -c, followed by the name of the collection which we wish to back up.
Finally, in step 4, we use --excludeCollection to back up the entire database, except for the collection mentioned with this parameter. This can be a handy switch when we want to exclude a particular collection from a backup.
Additionally, mongodump also comes with another flag, --excludeCollectionsWithPrefix. This flag can be used to exclude all collections which start with the string specified in its value.
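For example, a sketch using the tmpcol collection we created earlier; any collection in mydb whose name starts with tmp would be skipped:
mongodump --gzip -d mydb --excludeCollectionsWithPrefix=tmp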
In the previous recipe, we saw how to back up specific databases and their collections. In this recipe, we will look at how to use the mongodump utility to back up specific documents within a collection.
You need a single node MongoDB installation, preferably with some data in it. Refer to the recipe Creating an index in Chapter 2, Understanding and Managing Indexes, on how to import sample data into a MongoDB instance.
use mydb
db.mockdata.count({language:"Thai"})
892
{language: 'Thai'}
mongodump -d mydb -c mockdata --queryFile query.json
2017-10-04T12:17:28.559+0000 writing mydb.mockdata
2017-10-04T12:17:28.596+0000 done dumping mydb.mockdata (892 documents)
By now, these steps should be fairly self-explanatory. We begin by checking how many records are present in our mockdata collection for {language:"Thai"}. Next, we create a new JSON file called query.json and add the query to it. In step 3, we execute the mongodump utility and specify the --queryFile parameter with the name of the file containing the query. As a query always has to run against a collection, we have to specify the database and the collection name as the values of the -d and -c flags, respectively.
Instead of using a query file, you can also provide the query inline by using the --query parameter. Here, the command, used in step 3, would look like:
mongodump -d mydb -c mockdata --query "{language: 'Thai'}"
In this recipe, we will be using the bsondump tool to examine the BSON files created by the mongodump utility.
You need a single node MongoDB installation, preferably with some data in it. Refer to the recipe Creating an index in Chapter 2, Understanding and Managing Indexes, for instructions on how to import sample data into a MongoDB instance.
mongodump --gzip -d mydb -c mockdata --out /backups/dump
2017-10-04T12:50:37.000+0000 writing mydb.mockdata
2017-10-04T12:50:37.737+0000 done dumping mydb.mockdata (100000 documents)
ls -al dump/mydb/
-rw-r--r-- 1 root root 2836312 Oct 4 12:50 mockdata.bson.gz
-rw-r--r-- 1 root root 100 Oct 4 12:50 mockdata.metadata.json.gz
zcat /backups/dump/mydb/mockdata.bson.gz | bsondump --pretty
<-- output truncated -->
{
"_id": {
"$oid": "59913879d574be2148cf8294"
},
"first_name": "Helenka",
"last_name": "Gorries",
"gender": "Female",
"city": "Colcabamba",
"language": "Quechua"
}
{
"_id": {
"$oid": "59913879d574be2148cf8296"
},
"first_name": "Derward",
"last_name": "Sabbatier",
"gender": "Male",
"city": "Tindog",
"language": "Maltese"
}
<-- output truncated -->
The bsondump utility is a handy tool bundled with all MongoDB packages. It is very useful for parsing binary BSON files and printing them in a human-readable format. In step 1, we begin by taking a backup of the mockdata collection. As expected, in step 2, we can see that the collection's data was dumped into two files, namely mockdata.bson.gz and mockdata.metadata.json.gz. The former contains the actual data in BSON format, while the latter contains the metadata (index information) of the collection.
At this stage, I would like to point out two limitations of the bsondump tool. First, it cannot read a compressed file, not even gzip. Second, it cannot read a mongodump archive either; it can literally only read raw BSON files. So, in step 3, in order to conform to its requirements, we use Linux's zcat utility, which decompresses a gzip file to standard output, and pipe its output to bsondump. The additional --pretty flag is used to prettify the JSON output so that it is easier to read; if you plan to process the output through scripts, you can omit this flag.
MongoDB replica sets maintain an operations log in a capped collection called the oplog. The oplog is shared between the replica set nodes to maintain consistency. In this recipe, we will look at how to create point-in-time backups of replica sets using the oplog.
You need a three node MongoDB replica set installation, preferably with some data in it. Refer to the recipes Initializing a new replica set, in Chapter 4, High Availability with Replication, for instructions on how to create a replica set, and Creating an index in Chapter 2, Understanding and Managing Indexes, for instructions on how to import sample data into a MongoDB instance.
for(var x=0; x<100000; x++){ db.mycol.insert({age:(Math.round(Math.random()*100)%20) }) }
mongodump --oplog --out /backups/dump/
2017-10-04T15:02:53.921+0000 writing admin.system.version
2017-10-04T15:02:53.928+0000 done dumping admin.system.version (1 document)
2017-10-04T15:02:53.929+0000 writing mydb.mockdata
2017-10-04T15:02:53.934+0000 writing mydb.mycol
2017-10-04T15:02:53.986+0000 done dumping mydb.mycol (5922 documents)
2017-10-04T15:02:54.662+0000 done dumping mydb.mockdata (100000 documents)
2017-10-04T15:02:54.667+0000 writing captured oplog
2017-10-04T15:02:55.025+0000 dumped 941 oplog entries
ls -al /backups/dump/
drwxr-xr-x 2 root root 4096 Oct 4 14:37 admin
drwxr-xr-x 2 root root 4096 Oct 4 14:37 mydb
-rw-r--r-- 1 root root 106333 Oct 4 15:02 oplog.bson
bsondump /backups/dump/oplog.bson
<-- output truncated -->
{"ts":{"$timestamp":{"t":1507129375,"i":5}},"t":{"$numberLong":"104"},"h":{"$numberLong":"-6696408951828346681"},"v":2,"op":"i","ns":"mydb.mycol","o":{"_id":{"$oid":"59d4f81f1cfc955658fc1275"},"age":15.0}}
{"ts":{"$timestamp":{"t":1507129375,"i":6}},"t":{"$numberLong":"104"},"h":{"$numberLong":"-2383386998191081659"},"v":2,"op":"i","ns":"mydb.mycol","o":{"_id":{"$oid":"59d4f81f1cfc955658fc1276"},"age":9.0}}
2017-10-04T15:05:10.721+0000 941 objects found
The point of this recipe is to demonstrate a scenario where a database backup is being taken while there are operations still being performed on the server. In order to simulate this, we begin by opening two terminal windows. In step 1, we open a mongo shell (in the first Terminal window) and run a simple JavaScript code snippet, which would insert about 100,000 documents. As soon as you initiate this code, immediately switch to the other Terminal window and execute the mongodump utility as shown in step 2. We explicitly mention the --oplog flag to ensure we capture the point in time oplog entries of operations that are being performed during the backup operation. This will create a file called oplog.bson in the top level directory, as shown in the output in step 3.
Finally, if you feel industrious, you can explore the oplog.bson file using the bsondump utility, as shown in step 4.
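We will cover restoration in detail in the next chapter, but as a preview, a dump taken with --oplog can be rolled forward to its capture point by passing --oplogReplay to mongorestore; a minimal sketch, assuming the dump directory used above:
mongorestore --oplogReplay --dir /backups/dump/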
In this recipe, we will be looking at mongoexport, a utility provided to export MongoDB data in JSON and CSV format.
You need a single node MongoDB installation, preferably with some data in it. Refer to the recipe Creating an index in Chapter 2, Understanding and Managing Indexes, for instructions on how to import sample data into a MongoDB instance.
mongoexport -d mydb -c mockdata --fields first_name,last_name,language --query '{language: "English"}' --type csv > my_data.csv
2017-10-04T16:15:51.939+0000 connected to: localhost
2017-10-04T16:15:51.981+0000 exported 866 records
first_name,last_name,language
Gareth,Mott,English
Pace,Goodram,English
Valaree,Dickinson,English
Nickola,Messer,English
Ellene,Wardlaw,English
Caryn,Petruk,English
Alta,Major,English
Sonya,Ritchman,English
Howie,MacHostie,English
The mongoexport utility is not really the ideal tool to use for performing backups on production databases. However, it can be used as a very nifty tool to quickly extract data for generating reports or (if you are confident) maybe a quick backup in JSON or CSV format.
In step 1, we execute the mongoexport utility with some options similar to mongodump utility. The flags are -d for the name of the database, -c for the name of the collection, --fields for the name of the fields which should be returned, --query for the query to perform on the collection, and lastly, --type for the output type. We pipe the output to a file called my_data.csv and, upon its inspection, we can observe that the tool has created a comma-separated CSV file with its first line containing the name of the fields representing the output.
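Similarly, here is a sketch of exporting the same query as JSON instead of CSV, writing to a file with the --out flag rather than shell redirection (the output filename is illustrative):
mongoexport -d mydb -c mockdata --query '{language: "English"}' --out english_users.json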
In this recipe, we will be looking at how to take a backup of a sharded MongoDB cluster. We will be looking at how to backup the config server and the relevant shards which contain the actual data.
You will need a sharded MongoDB cluster, with a minimum of a config server replica set (CSRS) and one shard. Refer to the recipe Setting up and configuring a sharded cluster in Chapter 5, High Scalability with Sharding, on how to create a sharded cluster.
use config
sh.stopBalancer()
mongodump -h localhost -p 27019 -d config --out /backups/configbkp
mongodump -h localhost -p 27027 -d myShardedDB --out /backups/shard1bkp
use config
sh.startBalancer()
Taking backups of a sharded cluster is a bit nuanced, as certain steps have to be considered before planning a backup strategy. For starters, we need to stop the balancer; we begin by connecting to the mongos server and stopping the balancer, as shown in step 1. Next, in step 2, we use mongodump to take a backup of the config server. In a production environment, prior to executing mongodump, it is suggested to connect to a secondary of the config server replica set and issue rs.slaveOk() followed by db.fsyncLock(); this lets us read from the secondary while blocking writes to it for the duration of the backup. Next, in step 3, we connect to each shard (our example has only one shard) and execute the mongodump utility to take a backup of the database from that shard. Lastly, we connect back to the mongos instance and start the balancer by running the sh.startBalancer() command.
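To make the production note concrete, here is a minimal sketch of what you might run in a mongo shell connected to a config server secondary before and after the dump; anything beyond what is described above is an assumption:
rs.slaveOk()       // allow reads on this secondary
db.fsyncLock()     // flush and block writes for the duration of the backup
// ...run mongodump against this secondary from another terminal...
db.fsyncUnlock()   // release the lock once the dump completes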
In this chapter, we will cover the following recipes:
In the previous chapter, we saw various tools and techniques used to back up MongoDB data. Continuing from that trail, this chapter should help you understand how to restore data from a given backup. For most restoration use cases, we will use the mongorestore utility that comes bundled with the MongoDB installation. Additionally, we will cover different types of MongoDB setups, ranging from standalone servers and replica sets to sharded databases. So, let's get started!
In this recipe, we will be looking at how to use the mongorestore tool to restore a previously generated backup.
You need a single-node MongoDB installation, preferably with some data in it. Refer to the Creating an index recipe in Chapter 2, Understanding and Managing Indexes, on how to import sample data into a MongoDB instance.
for(var x=0; x<1000; x++){ db.mycol.insert({age:(Math.round(Math.random()*100)%20) }) }
mongodump --out /backups/dump/
2017-10-06T08:02:38.082+0000 writing admin.system.version
2017-10-06T08:02:38.083+0000 done dumping admin.system.version (1 document)
2017-10-06T08:02:38.083+0000 writing mydb.mockdata to
2017-10-06T08:02:38.084+0000 writing mydb.mycol to
2017-10-06T08:02:38.095+0000 done dumping mydb.mycol (1000 documents)
2017-10-06T08:02:38.279+0000 done dumping mydb.mockdata (100000 documents)
mongorestore --drop --dir /backups/dump/
2017-10-06T08:29:10.775+0000 preparing collections to restore from
2017-10-06T08:29:10.782+0000 reading metadata for mydb.mockdata from /backups/dump/mydb/mockdata.metadata.json
2017-10-06T08:29:10.786+0000 reading metadata for mydb.mycol from /backups/dump/mydb/mycol.metadata.json
2017-10-06T08:29:10.795+0000 restoring mydb.mycol from /backups/dump/mydb/mycol.bson
2017-10-06T08:29:10.809+0000 restoring mydb.mockdata from /backups/dump/mydb/mockdata.bson
2017-10-06T08:29:10.861+0000 no indexes to restore
2017-10-06T08:29:10.861+0000 finished restoring mydb.mycol (1000 documents)
2017-10-06T08:29:13.772+0000 [#################.......] mydb.mockdata 8.62MB/12.2MB (71.0%)
2017-10-06T08:29:15.211+0000 [########################] mydb.mockdata 12.2MB/12.2MB (100.0%)
2017-10-06T08:29:15.211+0000 no indexes to restore
2017-10-06T08:29:15.211+0000 finished restoring mydb.mockdata (100000 documents)
2017-10-06T08:29:15.211+0000 done
We begin simply by adding another collection to the existing mydb database, just to increase the sample dataset for verbosity. In step 2, we take a database backup by executing the mongodump utility with no special parameters other than the output directory. Next, in step 3, we execute the mongorestore utility with two parameters. The --drop parameter is used to drop the collections before importing. We are using it primarily to avoid duplicate key errors, which would occur because the backup contains the _id fields; as the _id field is unique, the restoration would otherwise yield duplicate key errors. In a real-world scenario, you should be extremely careful when adding the --drop flag. The mongorestore utility is also given the --dir flag, which points to the directory containing the backup files. The utility goes through each subdirectory within the supplied directory, reads the files present there, and restores the respective collections. Any indexes captured during the backup are also restored. If you provided the --gzip option when executing mongodump, you can provide the same flag to the mongorestore utility, allowing it to recognize the backup as a set of gzip-compressed files. Likewise, if you are restoring from an archive, you can use the --archive parameter instead of --dir.
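As a quick hedged illustration of the compressed variants just mentioned (the paths are arbitrary), a dump and restore using a gzip-compressed archive file would look something like this:
mongodump --gzip --archive=/backups/mydb.archive -d mydb
mongorestore --gzip --archive=/backups/mydb.archive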
Last but not least, if you are ever in doubt, you can always use the --dryRun option which provides extra verbose output like this:
mongorestore --dryRun -vvv --dir /backups/dump
The preceding command executes the entire process of database restoration without actually restoring the data. This can prove extremely useful when you wish to double-check the integrity of your data and the restoration parameters.
In this recipe, we will explore the options of mongorestore utility that allow us to restore backups for a specific database or collection. We will also look at how to exclude certain collections or databases during restoration.
You need a single-node MongoDB installation, preferably with some data in it. Refer to the Creating an index recipe in Chapter 2, Understanding and Managing Indexes, on how to import sample data into a MongoDB instance.
for(var x=0; x<1000; x++){ db.mycol.insert({age:(Math.round(Math.random()*100)%20) }) }
mongodump --out /backups/dump/
2017-10-06T08:02:38.082+0000 writing admin.system.version
2017-10-06T08:02:38.083+0000 done dumping admin.system.version (1 document)
2017-10-06T08:02:38.083+0000 writing mydb.mockdata to
2017-10-06T08:02:38.084+0000 writing mydb.mycol to
2017-10-06T08:02:38.095+0000 done dumping mydb.mycol (1000 documents)
2017-10-06T08:02:38.279+0000 done dumping mydb.mockdata (100000 documents)
mongorestore --drop -v --dir /backups/dump --nsInclude 'mydb.mockdata'
2017-10-08T10:45:43.108+0000 using --dir flag instead of arguments
2017-10-08T10:45:43.113+0000 using write concern: w='majority', j=false, fsync=false, wtimeout=0
2017-10-08T10:45:43.114+0000 preparing collections to restore from
2017-10-08T10:45:43.115+0000 found collection mydb.mockdata bson to restore to mydb.mockdata
2017-10-08T10:45:43.116+0000 found collection metadata from mydb.mockdata to restore to mydb.mockdata
2017-10-08T10:45:43.117+0000 dropping collection mydb.mockdata before restoring
2017-10-08T10:45:43.121+0000 reading metadata for mydb.mockdata from /backups/dump/mydb/mockdata.metadata.json
2017-10-08T10:45:43.126+0000 creating collection mydb.mockdata using options from metadata
2017-10-08T10:45:43.134+0000 restoring mydb.mockdata from /backups/dump/mydb/mockdata.bson
2017-10-08T10:45:46.110+0000 [##################......] mydb.mockdata 9.23MB/12.2MB (76.0%)
2017-10-08T10:45:47.083+0000 [########################] mydb.mockdata 12.2MB/12.2MB (100.0%)
2017-10-08T10:45:47.083+0000 no indexes to restore
2017-10-08T10:45:47.083+0000 finished restoring mydb.mockdata (100000 documents)
2017-10-08T10:45:47.083+0000 done
mongorestore -v --drop --dir /backups/dump --nsExclude 'mydb.mockdata'
2017-10-08T10:47:00.053+0000 using --dir flag instead of arguments
2017-10-08T10:47:00.059+0000 using write concern: w='majority', j=false, fsync=false, wtimeout=0
2017-10-08T10:47:00.060+0000 preparing collections to restore from
2017-10-08T10:47:00.061+0000 found collection admin.system.version bson to restore to admin.system.version
2017-10-08T10:47:00.062+0000 found collection metadata from admin.system.version to restore to admin.system.version
2017-10-08T10:47:00.063+0000 found collection mydb.mycol bson to restore to mydb.mycol
2017-10-08T10:47:00.064+0000 found collection metadata from mydb.mycol to restore to mydb.mycol
2017-10-08T10:47:00.065+0000 dropping collection mydb.mycol before restoring
2017-10-08T10:47:00.069+0000 reading metadata for mydb.mycol from /backups/dump/mydb/mycol.metadata.json
2017-10-08T10:47:00.076+0000 creating collection mydb.mycol using options from metadata
2017-10-08T10:47:00.082+0000 restoring mydb.mycol from /backups/dump/mydb/mycol.bson
2017-10-08T10:47:00.120+0000 no indexes to restore
2017-10-08T10:47:00.120+0000 finished restoring mydb.mycol (1000 documents)
2017-10-08T10:47:00.120+0000 done
We begin simply by adding another collection to the existing mydb database, just to increase the sample dataset for verbosity. In step 2, we take a database backup by executing the mongodump utility with no special parameters other than the output directory.
Now comes the interesting bit. If you recall from Chapter 6, Managing MongoDB Backups, we used the mongodump utility with the -d and -c flags, which correspond to the database and collection, respectively. As of MongoDB 3.4, the mongorestore utility uses namespace flags instead. A namespace is a dot (.) separated tuple of two parts: the database name followed by the collection or index name. For example, mydb.mockdata denotes the namespace of the mockdata collection in the mydb database. This nomenclature gives a more convenient way of expressing the parent-child relationship between a database and its subobjects. In step 3, we use the --nsInclude flag to explicitly specify the namespace that we wish to restore; in our case, that is mydb.mockdata.
In step 4, we use the --nsExclude flag to explicitly exclude a given collection while restoring data.
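The namespace flags also accept wildcards, which makes it easy to restore everything under a single database. As a sketch based on the same dump directory, the following should restore every collection in mydb and nothing else:
mongorestore --drop --dir /backups/dump --nsInclude 'mydb.*'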
In this recipe, we will look at how to use the mongorestore utility to restore data from one collection to another.
You need a single-node MongoDB installation, preferably with some data in it. Refer to the Creating an index recipe in Chapter 2, Understanding and Managing Indexes, on how to import sample data into a MongoDB instance.
for(var x=0; x<1000; x++){ db.mycol.insert({age:(Math.round(Math.random()*100)%20) }) }
mongodump --out /backups/dump/
2017-10-06T08:02:38.082+0000 writing admin.system.version to
2017-10-06T08:02:38.083+0000 done dumping admin.system.version (1 document)
2017-10-06T08:02:38.083+0000 writing mydb.mockdata to
2017-10-06T08:02:38.084+0000 writing mydb.mycol to
2017-10-06T08:02:38.095+0000 done dumping mydb.mycol (1000 documents)
2017-10-06T08:02:38.279+0000 done dumping mydb.mockdata (100000 documents)
mongorestore -v --dir /backups/dump --nsInclude 'mydb.mockdata' --nsFrom 'mydb.mockdata' --nsTo 'newdb.mockdata'
2017-10-08T12:25:18.460+0000 using --dir flag instead of arguments
2017-10-08T12:25:18.466+0000 using write concern: w='majority', j=false, fsync=false, wtimeout=0
2017-10-08T12:25:18.467+0000 preparing collections to restore from
2017-10-08T12:25:18.468+0000 found collection mydb.mockdata bson to restore to newdb.mockdata
2017-10-08T12:25:18.468+0000 found collection metadata from mydb.mockdata to restore to newdb.mockdata
2017-10-08T12:25:18.469+0000 reading metadata for newdb.mockdata from /backups/dump/mydb/mockdata.metadata.json
2017-10-08T12:25:18.470+0000 creating collection newdb.mockdata using options from metadata
2017-10-08T12:25:18.476+0000 restoring newdb.mockdata from /backups/dump/mydb/mockdata.bson
2017-10-08T12:25:21.462+0000 [################........] newdb.mockdata 8.38MB/12.2MB (69.0%)
2017-10-08T12:25:22.856+0000 [########################] newdb.mockdata 12.2MB/12.2MB (100.0%)
2017-10-08T12:25:22.857+0000 no indexes to restore
2017-10-08T12:25:22.858+0000 finished restoring newdb.mockdata (100000 documents)
2017-10-08T12:25:22.859+0000 done
mongorestore -v --dir /backups/dump --nsInclude 'mydb.mockdata' --nsFrom 'mydb.$colname$' --nsTo 'mydb.copyof_$colname$'
2017-10-08T12:26:29.037+0000 using --dir flag instead of arguments
2017-10-08T12:26:29.048+0000 using write concern: w='majority', j=false, fsync=false, wtimeout=0
2017-10-08T12:26:29.049+0000 preparing collections to restore from
2017-10-08T12:26:29.050+0000 found collection mydb.mockdata bson to restore to mydb.copyof_mockdata
2017-10-08T12:26:29.051+0000 found collection metadata from mydb.mockdata to restore to mydb.copyof_mockdata
2017-10-08T12:26:29.053+0000 reading metadata for mydb.copyof_mockdata from /backups/dump/mydb/mockdata.metadata.json
2017-10-08T12:26:29.054+0000 creating collection mydb.copyof_mockdata using options from metadata
2017-10-08T12:26:29.070+0000 restoring mydb.copyof_mockdata from /backups/dump/mydb/mockdata.bson
2017-10-08T12:26:32.045+0000 [##################......] mydb.copyof_mockdata 9.23MB/12.2MB (76.0%)
2017-10-08T12:26:32.725+0000 [########################] mydb.copyof_mockdata 12.2MB/12.2MB (100.0%)
2017-10-08T12:26:32.725+0000 no indexes to restore
2017-10-08T12:26:32.725+0000 finished restoring mydb.copyof_mockdata (100000 documents)
2017-10-08T12:26:32.725+0000 done
We begin simply by adding another collection to the existing mydb database, just to increase the sample dataset for verbosity. In step 2, we take a database backup by executing the mongodump utility with no special parameters other than the output directory.
In step 3, we use two new parameters: --nsFrom and --nsTo. The value of --nsFrom indicates the source namespace that is to be renamed during restoration, and the value of --nsTo indicates the target namespace. In our example, we are restoring the backup of mydb.mockdata into a new database, as newdb.mockdata.
You may have noticed that I am still using the --nsInclude flag with the mydb.mockdata value. This ensures that the mongorestore utility only considers the mydb.mockdata namespace and no other databases or collections. In the absence of this flag, the mongorestore utility would restore all the backups found in the --dir path. So, to restrict its scope to only the relevant database and collection, we have to use the --nsInclude flag.
Finally, in step 4, we use the utility's feature of allowing variable substitution in renaming namespaces. Let's break down the flags and their values used in this step:
| Flag | Value | Effect |
| --nsInclude | mydb.mockdata | Restrict match to database mydb and collection mockdata. |
| --nsFrom | mydb.$colname$ | For each collection matched by --nsInclude, match for database mydb and store the collection name in a variable called $colname$. |
| --nsTo | mydb.copyof_$colname$ | For each collection returned by --nsFrom, restore data to the namespace mydb.copyof_$colname$, where the variable $colname$ is replaced by its value. For example, mydb.mockdata would get restored as mydb.copyof_mockdata. |
This little trick can be very useful in the following scenarios:
In this recipe, we will look at how to set up a new replica set using a previously generated backup.
All you need is a MongoDB backup, preferably one generated in the Taking backup using mongodump tool recipe of Chapter 6, Managing MongoDB Backups. For this example, we will assume the database backup is present in /backups/dump/. Additionally, we will create individual directories for each mongod instance:
mkdir -p /data/server{1,2,3}/db
mongod --dbpath /data/server1/db --replSet MyReplicaSet --port 27017
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.200.200:27017",
"ok" : 1
}
mongorestore --dir /backups/dump/
2017-10-08T13:25:17.215+0000 preparing collections to restore from
2017-10-08T13:25:17.216+0000 reading metadata for mydb.mockdata from /backups/dump/mydb/mockdata.metadata.json
2017-10-08T13:25:17.226+0000 restoring mydb.mockdata from /backups/dump/mydb/mockdata.bson
2017-10-08T13:25:17.252+0000 no indexes to restore
2017-10-08T13:25:18.385+0000 finished restoring mydb.mockdata (100000 documents)
2017-10-08T13:25:18.385+0000 done
use admin
db.shutdownServer({force: true})
cp -Rpf /data/server1/db/* /data/server2/db/
cp -Rpf /data/server1/db/* /data/server3/db/
mongod --dbpath /data/server1/db --replSet MyReplicaSet --port 27017
mongod --dbpath /data/server2/db --replSet MyReplicaSet --port 27018
mongod --dbpath /data/server3/db --replSet MyReplicaSet --port 27019
rs.add('192.168.200.200:27018')
rs.add('192.168.200.200:27019')
rs.status()
We begin by starting a single-node replica set running on port 27017. Next, in step 2, we connect to this instance and initialize the replica set using the rs.initiate() command. For step 3, we switch to a different console window and restore the database backup using the mongorestore utility. In step 4, we switch back to the mongo shell, which is connected to our replica set node, and shut down the server using the db.shutdownServer({force: true}) command. By default, a replica set primary will not shut down unless another member can take over, and as our set has no other members yet, we use the force: true parameter to force it into bending the knee. In step 5, we copy the data from the primary instance's --dbpath to the respective paths for the other two instances. Next, in step 6, we start all three replica set instances with their respective --dbpath and --port parameters. In step 7, we connect to the primary replica set instance and add the other two instances. Lastly, we can check that the other two nodes have been added and are in sync using the rs.status() command.
Clearly, this is an overly simplified example of how to create a replica set from an existing backup. An alternate method would be to restore the backup on a single node, start the secondary nodes with no data in them, and simply add them to the replica set using the rs.add() command. As this method initiates an on-the-fly sync of data between the nodes, you should add the nodes one by one.
In a production environment, if the dataset is too large, it is preferable to seed the new nodes with the data files and then let them catch up with the primary node once they are added.
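Either way, once the secondaries report as healthy, it is worth a quick sanity check that the restored data is actually visible on them; a minimal sketch against one of the secondaries from our example:
mongo 192.168.200.200:27018
rs.slaveOk()
use mydb
db.mockdata.count()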
In this recipe, we will be looking at how to restore a sharded cluster from a previously generated backup.
You will need a sharded cluster, with a minimum of a config server replica set (CSRS) and one shard. Refer to the Setting up and configuring a sharded cluster recipe in Chapter 5, High Scalability with Sharding, on how to create a sharded cluster.
Additionally, we will need a previously generated backup from a sharded cluster; you can refer to the Creating a backup of a sharded cluster recipe in Chapter 6, Managing MongoDB Backups.
mongorestore -h 192.168.200.200 -p 27027 --drop --dir /backups/shard1bkp/
mongorestore -h 192.168.200.200 -p 27019 --drop --dir /backups/configbkp/
We begin by initializing the sharded cluster: first, we ensure that the CSRS is created and initialized, and then we start the shard servers and the mongos instances. Once the cluster is initialized, we shut down the mongos server to ensure that no data is accidentally written to the cluster. We then restore the data on each shard, one instance at a time, as shown in step 2.
Next, in step 3, we stop the MongoDB shards, and in step 4, we restore the config server's data. Finally, once the data is restored, we start the shards and mongos instances. Once the entire cluster is up and running, we can connect to the mongos instance and run the db.printShardingStatus() command to confirm the health of the cluster.
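As a quick sketch of that final check (the mongos address is illustrative; 27017 is simply the common default port), you would connect to the mongos and inspect the sharding metadata:
mongo 192.168.200.200:27017
db.printShardingStatus()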
The following recipes are included in this chapter:
Monitoring is perhaps one of the most crucial components of any production system. Keeping a close watch on the important metrics and signals emitted by a system can help gain wonderful insights into the system's usage and behavior. It can help you debug issues, identify and optimize bottlenecks, and avoid catastrophic failures. Thankfully, MongoDB comes bundled with useful tools and commands that help us monitor its health and take appropriate measures to ensure its optimal utilization. In this chapter, we will look at various tools such as mongostat and mongotop, work with database commands that give extremely useful metrics, monitor operating system subsystems, and monitor backups.
Let's get started!
In this recipe, we will take a look at the mongostat utility, which is bundled with standard MongoDB binary distributions. This tool gives good insights into MongoDB utilization and is also capable of being used programmatically.
For this recipe, at a minimum, you need a single-node MongoDB setup.
use mydb
for(var x=0; x<20000; x++){
db.mycol.insert({age:(Math.round(Math.random()*100)%20)});
db.mycol.findAndModify({query: {age: (Math.round(Math.random()*100)%20)}, update:{$inc: {age: 2}}});
db.mycol.remove({age:(Math.round(Math.random()*100)%20)});
}

mongostat -o 'host,metrics.document.inserted.rate()=insert_rate,metrics.document.inserted=inserted_count'

mongostat -n 1 --json | python -m json.tool
{
"localhost:27017": {
"arw": "0|0",
"command": "2|0",
"conn": "2",
"delete": "*0",
"faults": "0",
"flushes": "0",
"getmore": "0",
"insert": "*0",
"mapped": "",
"net_in": "158b",
"net_out": "27.1k",
"qrw": "0|0",
"query": "*0",
"res": "218M",
"time": "13:29:41",
"update": "*0",
"vsize": "2.99G"
}
}
We begin by connecting to the MongoDB instance using the mongo shell. In order to simulate operations, we execute a JavaScript code snippet that runs 20,000 iterations of inserts, updates (via findAndModify), and removes. This should give us a few seconds to switch to another Terminal window and see mongostat in action.
In step 2, we execute the mongostat utility with no command-line parameters. The default output is pretty detailed and will execute indefinitely until you hit Ctrl + C. In the sample output, each column represents a particular metric, explained as follows:
| Metric | Description |
| insert, query, update, and delete | This denotes the rate of each operation type per second |
| getmore | This is the rate of cursor batch fetches per second |
| command | This denotes the number of commands per second |
| flushes | For MMAPv1, this represents the number of fsyncs per second, whereas for the WiredTiger engine, it represents the rate of checkpoints triggered per polling interval |
| mapped | This is the total size of data mapped, for the MMAPv1 storage engine |
| vsize and res | This is the virtual and resident memory size of the mongod/mongos process |
| faults | For MMAPv1 only, this represents the number of page faults per second |
| qr and qw | This denotes the queue length of clients waiting for reads and writes, respectively |
| ar and aw | This is the current number of active clients performing read and write operations |
The mongostat utility also allows customization of the fields in its output. In step 3, we execute mongostat with the -o option along with the list of fields that need to be displayed in the output. Note that we are adding the rate() function at the end of the insert metrics. This function shows the rate per second of the given metric. Additionally, mongostat also provides a diff() function that can show the difference between the current and the previous values of the given metric.
Lastly, in step 4, we take an overly simplistic example to show how mongostat can also generate output in JSON format. This can be extremely handy for fetching metrics in a serialized format and parsing them in a script.
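For instance, a quick (and admittedly hacky) way to pull a single value out of that JSON from a shell script could look like this; the host key matches the sample output above and would differ in your environment:
mongostat -n 1 --json | python -c "import json,sys; doc = json.load(sys.stdin); print(doc['localhost:27017']['conn'])"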
The ideal state for a replica set is when all nodes within the cluster are in sync. In this recipe, we will look at how to check the replication lag of the nodes in a replica set.
We will need at least a multinode replica set cluster. You can refer to the Adding a node in a replica set recipe in Chapter 4, High Availability with Replication, for more details.
rs.printReplicationInfo()
configured oplog size: 1578.62548828125MB
log length start to end: 142363secs (39.55hrs)
oplog first event time: Sun Oct 08 2017 18:54:12 GMT+0530 (IST)
oplog last event time: Tue Oct 10 2017 10:26:55 GMT+0530 (IST)
now: Tue Oct 10 2017 10:26:59 GMT+0530 (IST)
rs.printSlaveReplicationInfo()
source: 192.168.200.200:27018
syncedTo: Tue Oct 10 2017 10:28:35 GMT+0530 (IST)
0 secs (0 hrs) behind the primary
source: 192.168.200.200:27019
syncedTo: Tue Oct 10 2017 10:28:35 GMT+0530 (IST)
0 secs (0 hrs) behind the primary
use mydb
for(var x=0; x<100000; x++){
db.mycol.insert({age:(Math.round(Math.random()*100)%20)});
db.mycol.findAndModify({query: {age: (Math.round(Math.random()*100)%20)}, update:{$inc: {age: 2}}});
db.mycol.remove({age:(Math.round(Math.random()*100)%20)});
}
rs.printSlaveReplicationInfo()
source: 192.168.200.200:27018
syncedTo: Tue Oct 10 2017 10:30:11 GMT+0530 (IST)
0 secs (0 hrs) behind the primary
source: 192.168.200.200:27019
syncedTo: Tue Oct 10 2017 10:29:45 GMT+0530 (IST)
26 secs (0.01 hrs) behind the primary
We begin by bringing up a multinode replica set cluster. In step 2, while connected to the primary instance of the replica set, we execute the rs.printReplicationInfo() command. This command prints out details pertaining to the oplog. The oplog, or operations log, is a capped collection on the primary node that records the database operations performed on it; these entries are replayed on the secondary nodes within the replica set. This is the core mechanism by which nodes within a replica set are synchronized. Coming back to the rs.printReplicationInfo() command's output, we can see the configured size of the oplog, the log length (the window between its first and last events), and the timestamps of those first and last events. This can be a good reference point for knowing the current state of replication in the cluster.
In step 3, we execute the rs.printSlaveReplicationInfo() command, which prints the point-in-time status of the last replicated event on each of the replica set's nodes. Each oplog entry has a timestamp associated with it; hence, by knowing the last event's timestamp, MongoDB can determine the lag between the secondary and the primary nodes of the replica set. The rs.printSlaveReplicationInfo() command does exactly this. If you type rs.printSlaveReplicationInfo (without the parentheses), you can view the source of this command, which helps you understand how it simply iterates through each member of the replica set and calculates the difference between the timestamps. In our sample output, you can see that we have two secondary nodes in the replica set, and both are in sync with the primary.
In step 4, we run a simple JavaScript snippet to simulate some traffic, while ensuring that one of the secondary nodes is turned off. As this snippet takes a few seconds to execute, we bring the secondary node back online during that time, with the aim of creating an artificial lag between it and the primary node. Immediately switch to the primary node's mongo shell and execute the rs.printSlaveReplicationInfo() command. As shown in the sample output, the secondary node is now catching up to the primary and is lagging behind by 26 seconds.
In a production environment, replication lags are pretty common. Factors such as network congestion, disk I/O limitations, memory limitations, and so on can be some of the causes for a node to start lagging. It is one of the most important metrics of a replica set that should always be monitored.
There is a very good Nagios plugin project that can be a good starting point to set up a monitor for a replica log. Even if you are not using the Nagios monitoring system, this plugin can still be used as a standalone monitoring script. Source code is available at: https://github.com/mzupan/nagios-plugin-mongodb.
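If you prefer to roll your own check, the same information can be derived directly from rs.status() in the mongo shell. The following is a small sketch, assuming it is run while a primary is visible; it prints each secondary's lag in seconds:
var status = rs.status();
var primary = status.members.filter(function(m) { return m.stateStr == "PRIMARY"; })[0];
status.members.forEach(function(m) {
  if (m.stateStr == "SECONDARY") {
    // optimeDate is the timestamp of the last oplog entry applied by this member
    print(m.name + " is " + (primary.optimeDate - m.optimeDate) / 1000 + " secs behind the primary");
  }
});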
In this recipe, we will look at how to find and monitor operations on MongoDB. This can help us keep an eye on any anomalous behavior or catch suboptimal queries.
All you need is a single-node MongoDB instance. Additionally, in order to simulate a busy production system, you may need to add a collection with a couple of million documents. If you are lazy like me, simply run the following:
for x in $(seq 30); do mongoimport -h 192.168.200.200 --type csv --headerline -d mydb -c mycol chapter_2_mock_data.csv; done
db.people.find({name: 'Fubar'})
use mydb
db.currentOp()
{
"inprog" : [
{
"desc" : "conn11",
"threadId" : "139774219482880",
"connectionId" : 11,
"client" : "127.0.0.1:33822",
"appName" : "MongoDB Shell",
"clientMetadata" : {
"application" : {
"name" : "MongoDB Shell"
},
"driver" : {
"name" : "MongoDB Internal Client",
"version" : "3.4.6"
},
"os" : {
"type" : "Linux",
"name" : "Ubuntu",
"architecture" : "x86_64",
"version" : "14.04"
}
},
"active" : true,
"opid" : 348294,
"secs_running" : 0,
"microsecs_running" : NumberLong(543337),
"op" : "query",
"ns" : "mydb.people",
"query" : {
"find" : "people",
"filter" : {
"name" : "Fubar"
}
},
"planSummary" : "COLLSCAN",
"numYields" : 7447,
"locks" : {
"Global" : "r",
"MMAPV1Journal" : "r",
"Database" : "r",
"Collection" : "R"
},
"waitingForLock" : false,
"lockStats" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(14896)
}
},
"MMAPV1Journal" : {
"acquireCount" : {
"r" : NumberLong(7448)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(7448)
}
},
"Collection" : {
"acquireCount" : {
"R" : NumberLong(7448)
}
}
}
},
<-- output truncated -->
In step 1, assuming we have a good number of records, we run a query for a document that does not exist; with any luck, the query will run for a couple of seconds. In step 2, we open a second Terminal window and execute the db.currentOp() command. If you are unsuccessful on the first attempt, increase your dataset and try again.
The db.currentOp() command gives out a list of all running operations on the database. These can be server-initiated operations as well as client-run operations. Each operation comes with its own operation ID, which is represented as the opid key. This is a unique integer that can be used with the db.killOp() command if you wish to kill the operation.
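For example, using the opid from the preceding sample output, the long-running collection scan could be terminated like this:
db.killOp(348294)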
There are a lot of variations to the command. For example, if you want to find all update operations for a particular database that are taking more than one second to execute, you can run the following command:
db.currentOp({
"active" : true,
"secs_running" : { "$gt" : 1 },
"op": "update",
"ns": /^mydb\./
})
As explained in Chapter 2, Understanding and Managing Indexes, MongoDB's performance is greatly dependent on the system's available memory and disk type. In this recipe, we will look at a few tools and MongoDB commands that can help us identify disk I/O utilization.
You need a single-node MongoDB instance.
db.serverStatus()
{
"host" : "vagrant-ubuntu-trusty-64",
"version" : "3.4.6",
"process" : "mongod",
"pid" : NumberLong(20657),
"uptime" : 5626,
"uptimeMillis" : NumberLong(5625427),
"uptimeEstimate" : NumberLong(5625),
"localTime" : ISODate("2017-10-10T10:02:49.306Z"),
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 0,
"rollovers" : 0
},
"backgroundFlushing" : {
"flushes" : 93,
"total_ms" : 457,
"average_ms" : 4.913978494623656,
"last_ms" : 4,
"last_finished" : ISODate("2017-10-10T10:02:04.054Z")
},
"connections" : {
"current" : 2,
"available" : 51198,
"totalCreated" : 41
},
<-- output truncated -->
The db.serverStatus() command gives an exhaustive list of metrics, each having its own merit. However, for this exercise, we will only concentrate on the ones that provide insights into the server's disk I/O:
In addition to MongoDB's internal metrics, it is extremely important to monitor the operating system's disk I/O metrics. Tools such as iostat should give a detailed view of the current disk utilization and help you identify bottlenecks.
For example, to view the disk I/O of /dev/sda at an interval of 2 seconds, run the following command:
iostat -x sda 2
This should give an output similar to this:

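Alongside iostat, the mongotop utility that ships with MongoDB shows how much time the server spends reading and writing each namespace, which helps correlate raw disk activity with specific collections. For example, to poll every five seconds:
mongotop 5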
In this recipe, we will look at how to fetch metrics using the Diamond metrics collector and send them to Graphite, a tool for storing and viewing time series data.
You need a single-node MongoDB instance.
sudo apt install python-pip
sudo pip install diamond
mkdir /etc/diamond
mkdir /var/log/diamond
docker run -d\
--name graphite\
--restart=always\
-p 80:80\
-p 2003-2004:2003-2004\
-p 2023-2024:2023-2024\
-p 8125:8125/udp\
-p 8126:8126\
graphiteapp/graphite-statsd
/usr/local/bin/diamond -f

Diamond is an open source tool that was built to collect metrics and transfer them to handlers. Graphite is another open source suite that can be used to collect time series data and plot graphs against it.
In step 1, we install the Diamond application using the Python pip installer. Once installed, we copy the diamond.conf file supplied with this book to the /etc/diamond directory. If you are industrious, you can instead copy the sample configuration file available in Diamond's git repository, listed at the end of this recipe.
Next, in step 3, we fetch the Graphite Docker image and start a container that runs the Graphite UI as well as its Carbon metrics collector.
In step 4, we start the Diamond server in the foreground, and in another Terminal window, we monitor the Diamond collector's log file to ensure it is able to emit metrics to the Graphite server. If all goes well, you should see the Graphite UI on your server's TCP port 80 with a screen similar to the preceding screenshot.
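If the graphs stay empty, it is worth confirming that the container's Carbon listener is actually reachable; a simple hedged check is to push a dummy metric over Carbon's plain-text protocol on port 2003 (depending on your netcat variant, you may need to terminate the connection manually):
echo "test.dummy.metric 42 $(date +%s)" | nc localhost 2003
The metric should then show up under test.dummy.metric in the Graphite metric tree.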
In this chapter, we will cover the following recipes:
Databases are notoriously overlooked when it comes to security. Many a time, engineers assume that because their application abstracts the underlying database, the actual database systems are unreachable from the outside world. However, if you think from first principles, you have to make sure your database systems are completely locked down, not only from the outside world but also within your own infrastructure. Every application, user, or server that needs to communicate with the database system should do so through a well-established access control list (ACL) mechanism. Thankfully, MongoDB provides a great many features that help facilitate robust authentication and authorization models. In this chapter, we will look at how to implement various authentication and authorization rules to ensure that your production systems are secure. We will begin by creating a superuser and enabling authentication in MongoDB. Lastly, we will look at various role-based access models, creating custom roles, and managing passwords.
In this recipe, we will look at how to create a superuser account in MongoDB and force MongoDB to use authentication.
We begin by connecting to a standard mongod instance and switching to the admin database. All users and roles are stored in the system.users and system.roles collections, respectively, which reside in the admin database.
In step 2, we use the db.createUser() command to add a user with options for its username, password, and roles. The roles field is an array that can be used to grant multiple roles, each specific to particular operations on a particular database. In our case, we create a user called superadmin with the password supasecret. Additionally, we grant this user the root role on the admin database. The root role is pretty much the highest level role, and it allows almost all operations on the system. By granting this role on the admin database, we ensure that this user has the administrative privileges to manage all users, databases, and clusters, as well as backup and restoration privileges. We will look into the details of roles in the next recipe.
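For reference, a minimal sketch of that user-creation command, using the same credentials referenced throughout this chapter, looks like this:
use admin
db.createUser(
  {
    user: "superadmin",
    pwd: "supasecret",
    roles: [ { role: "root", db: "admin" } ]
  }
)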
Next, we restart the mongod instance, but this time with an additional flag, --auth. This flag forces mongod to reject any attempted operation until the client (user) is authenticated. We can then see that even the simplest of operations, such as show dbs, will not work until the user is authenticated.
In order to authenticate, we use the db.auth() command after selecting the admin database. This command accepts a username and password as its parameters and returns 1 if successful. Once a user is authenticated, you can do pretty much anything with this new superuser!
In this recipe, we will look at how to use built-in roles provided by MongoDB and assign them to users.
You should have a MongoDB instance with authentication enabled and an administrator account created. Refer to the first recipe of this chapter for more details.
use admin
db.auth('superadmin', 'supasecret')
use mydb
db.createUser(
{
user: "mydb_user",
pwd: "secret",
roles: [{role: "read", db: "mydb"}]
}
)
Successfully added user: {
"user" : "mydb_user",
"roles" : [
{
"role" : "read",
"db" : "mydb"
}
]
}
use mydb
db.auth('mydb_user', 'secret')
db.mockdata.count()
100000
db.foo.insert({bar:1})
WriteResult({
"writeError" : {
"code" : 13,
"errmsg" : "not authorized on mydb to execute command { insert: \"foo\", documents: [ { _id: ObjectId('59df6b03a535375680f37358'), bar: 1.0 } ], ordered: true }"
}
})
We begin by connecting to the mongod instance, switching to the admin database, and authenticating with the superadmin account. This is the same account we created in the first recipe of this chapter; it has root privileges. Next, in step 2, we switch to the mydb database and use the db.createUser() command to add a user. By doing so, we ensure that the user created is scoped to the mydb database. In the roles field, we add the built-in role read, limited to the mydb database. The read role is a MongoDB built-in role that grants read-only access (a specific set of commands) to the non-system collections of the database mentioned in the db field. MongoDB built-in roles are further classified into database-specific user roles, database-specific admin roles, backup and restoration roles, cluster management roles, and so on.
The following list of roles is available for database-specific user roles:
| Role name | Commands available in the role |
| read | collStats, dbHash, dbStats, find, killCursors, listCollections, and listIndexes |
| readWrite | All of the read actions, plus collMod, convertToCapped, createCollection, createIndex, dropCollection, dropIndex, insert, remove, renameCollectionSameDB, and update |
In step 4, we disconnect from the mongo shell and reconnect. We switch to the mydb database and authenticate using the newly created user, mydb_user. Once authenticated, we can run only the specific commands granted by the read role (as listed in the preceding table); any other operation is denied. This can be seen in the sample output, where a count() query works perfectly fine but an insert() is rejected.
If you wish to see the users and corresponding roles of a database, you can run the db.getUsers() command. The output would be an array of users associated with the database and their roles:
[
{
"_id" : "mydb.foo",
"user" : "foo",
"db" : "mydb",
"roles" : [ ]
},
{
"_id" : "mydb.mydb_user",
"user" : "mydb_user",
"db" : "mydb",
"roles" : [
{
"role" : "read",
"db" : "mydb"
}
]
}
]
For more information about other types of built-in roles, go through the following documentation: https://docs.mongodb.com/manual/reference/built-in-roles.
In this recipe, we will look at how to create a custom role and assign it to users. We will also have a quick look at how to add roles to and revoke roles from a user.
You will need a standard MongoDB installation. Additionally, we will continue from the previous recipe, where we had created a database user and assigned it a built-in role.
use admin
db.auth('superadmin', 'supasecret')
use mydb
db.createRole(
{
role: "InsertAndReadOnly",
privileges: [
{
actions: [ "find", "insert" ],
resource: { db: "mydb", collection: "mockdata" }
}
],
roles: []
}
)
db.getRole('InsertAndReadOnly', { showPrivileges: true })
{
"role" : "InsertAndReadOnly",
"db" : "mydb",
"isBuiltin" : false,
"roles" : [ ],
"inheritedRoles" : [ ],
"privileges" : [
{
"resource" : {
"db" : "mydb",
"collection" : "mockdata"
},
"actions" : [
"find",
"insert"
]
}
],
"inheritedPrivileges" : [
{
"resource" : {
"db" : "mydb",
"collection" : "mockdata"
},
"actions" : [
"find",
"insert"
]
}
]
}
db.getUser('mydb_user')
{
"_id" : "mydb.mydb_user",
"user" : "mydb_user",
"db" : "mydb",
"roles" : [
{
"role" : "read",
"db" : "mydb"
}
]
}
db.revokeRolesFromUser(
"mydb_user",
[
{ role: "read", db: "mydb" }
]
)
db.getUser('mydb_user')
{
"_id" : "mydb.mydb_user",
"user" : "mydb_user",
"db" : "mydb",
"roles" : [ ]
}
db.grantRolesToUser(
"mydb_user",
[
{ role: "InsertAndReadOnly", db: "mydb" }
]
)
db.getUser('mydb_user')
{
"_id" : "mydb.mydb_user",
"user" : "mydb_user",
"db" : "mydb",
"roles" : [
{
"role" : "InsertAndReadOnly",
"db" : "mydb"
}
]
}
db.mockdata.insert({foo:'bar'})
WriteResult({ "nInserted" : 1 })
db.mockdata.remove({foo:'bar'})
WriteResult({
"writeError" : {
"code" : 13,
"errmsg" : "not authorized on mydb to execute command { delete: \"mockdata\", deletes: [ { q: { foo: \"bar\" }, limit: 0.0 } ], ordered: true }"
}
})
We begin by authenticating as the superadmin user previously created. In step 2, we switch to the database where we want to create the new role. By using the db.createRole() command, we create a new role called InsertAndReadOnly in the mydb database. Roles are defined as a tuple of actions and resources in which we grant a set of actions as an array of command names against a set of resources. The latter is a document that consists of the database name and the name of the collection. I would also like to point out that roles can inherit from other roles. This can be achieved by adding another key called roles in the db.createRole() command.
For example:
db.createRole(
{
role: "InsertAndReadOnly",
privileges: [
{
actions: [ "find", "insert" ],
resource: { db: "mydb", collection: "mockdata" }
}
],
roles: [{role: "<role>", db: "<database>"}]
}
)
The preceding command would create a new role with specified privileges as well as inherit the privileges from <role> and apply it to <database>.
Once the custom role is created, in step 3, we use the db.getRole() command to fetch the details of a given role. In step 4, we fetch the details of mydb_user by executing the db.getUser() command. This command lists the details for the given user, including the roles assigned to it. In our example, this user already has the built-in role read assigned on mydb. Let's change that.
In step 5, we revoke the user's roles by executing the db.revokeRolesFromUser() command and specifying the username as well as role(s) that need to be removed. We confirm that the role was revoked by executing db.getUser('mydb_user').
Next, in step 7, we add a role to the user by executing db.grantRolesToUser(). This command appends a role to the list of roles for a given user, so you need not worry about overwriting any previously assigned roles.
Lastly, we test the scope of the role by successfully inserting a document in the collection and, at the same time, not being able to remove a document from the collection.
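If the custom role later needs additional privileges, you do not have to recreate it; they can be granted in place. A hedged sketch, extending the role from this recipe with the update action:
db.grantPrivilegesToRole(
  "InsertAndReadOnly",
  [
    { resource: { db: "mydb", collection: "mockdata" }, actions: [ "update" ] }
  ]
)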
When running database systems, people usually tend to only rely on authentication and forget to limit the scope of a given user's access. Roles can be an extremely useful aspect of authorization. By assigning roles and limiting the scope of a user's access to the databases, production database systems can be made more secure.
In this recipe, we will look at how to restore the password for a super administrator user.
You should have a MongoDB instance with authentication enabled and an administrator account created. Refer to the Setting up authentication in MongoDB and creating a superuser account recipe of this chapter.
mongod --dbpath /data/db
use admin
db.system.users.find({'roles.role': "root"})
{
"_id" : "admin.superadmin",
"user" : "superadmin",
"db" : "admin",
"credentials" : {
"SCRAM-SHA-1" : {
"iterationCount" : 10000,
"salt" : "X3IUy53syah8GEZozwwCPA==",
"storedKey" : "nzGTHW7yeKUjGxYWEJNkCkzcQZU=",
"serverKey" : "HiUfRvoW4MyiOQvtWk3FbLy4bvg="
}
},
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
db.changeUserPassword("superadmin", "Correct Horse Battery Staple")
mongod --dbpath /data/db --auth
use admin
db.auth("superadmin", "Correct Horse Battery Staple")
This is a fairly straightforward recipe. We begin by shutting down the mongod instance and starting it again without the --auth parameter. By doing so, we start the mongod instance without authentication support. Next, in step 3, we switch to the admin database and, in step 4, we execute a find() query to search for any users who already have the root built-in role assigned to them.
Next, we execute the db.changeUserPassword() command with the target username and its new password. Once done, we can stop and start the mongod instance again, this time with the --auth parameter, to ensure that authentication is enabled.
On a running system, if you already have an administrative user, you can change any user's password using the db.changeUserPassword() command. You do not need to shut down the mongod instance.
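For instance, while authenticated as an administrative user on a running instance, resetting the application user from the earlier recipes might look like this (the new password is, of course, illustrative):
use mydb
db.changeUserPassword("mydb_user", "an0ther-secret")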
For the most part in this chapter, we have discussed how to authenticate and authorize users in MongoDB. However, it is equally important to ensure that unwanted servers do not get attached to a closed system such as a replica set.
In this recipe, we will look at how to achieve inter-server authentication within a MongoDB replica set using key files.
You only need standard MongoDB binaries.
openssl rand -base64 756 > /data/keyfile
chmod 400 /data/keyfile
mongod --dbpath /data/server1/db --replSet MyReplicaSet --port 27017 --keyFile /data/keyfile
mongod --dbpath /data/server2/db --replSet MyReplicaSet --port 27018 --keyFile /data/keyfile
mongod --dbpath /data/server3/db --replSet MyReplicaSet --port 27019 --keyFile /data/keyfile
mongo localhost:27017
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "vagrant-ubuntu-trusty-64:27017",
"ok" : 1
}
rs.add('192.168.200.200:27018')
rs.add('192.168.200.200:27019')
{ "ok" : 1 }
admin = db.getSiblingDB("admin")
admin.createUser(
{
user: "myadmin",
pwd: "supasecret",
roles: [ { role: "root", db: "admin" } ]
}
)
mongo 192.168.200.200:27017
use admin
db.auth('myadmin','supasecret')
A key file is nothing but a regular file that contains a shared secret; MongoDB allows a key file to contain 6 to 1,024 characters. In our example, we begin by creating a key file using the openssl utility, passing it the parameters rand -base64 756 to generate a random sequence of roughly 1,024 Base64 characters.
Next, in step 2, we use the chmod utility available in Unix to ensure that this newly created key file is not globally readable. The command chmod 400 <filename> strips all permissions from the file except read access for the file owner.
In step 3, we start three MongoDB replica set nodes, each listening on a different port and using a different --dbpath. In a more realistic scenario, you would probably be running each instance on a separate host. For more information on how to manage replica sets, refer to Chapter 4, High Availability with Replication. In addition to the standard parameters, we use the --keyFile parameter to indicate that we will be using key-file-based internal authentication. The value of this parameter is the location of the key file; in our example, it is /data/keyfile.
Once the replica set nodes are running, as shown in step 4, we connect to one of the nodes using the mongo shell from the same host as that of the mongod instance. It is extremely important that you connect from the same host; we will cover that in a moment.
In step 5, we initiate the replica set using the rs.initiate() command, followed by adding the other two nodes to the replica set using the rs.add() command.
Coming back to the point on why we connected using localhost: MongoDB provides a feature known as the localhost exception, which allows you to create the first user and role (optional) when none exist. This is extremely useful when you have enabled an authentication mechanism but don't yet have an initial user to authenticate with. By using the --keyFile parameter, we enable not only internal authentication between the replica set nodes but also client authentication. Hence, once key-file-based authentication is enabled, your clients (such as the mongo shell) will not be able to do anything unless they first authenticate with a valid user.
So in step 7, we create the first superuser using the db.createUser() command and assign it the built-in role root on the admin database. This user should be able to do pretty much any operation on the cluster. We can confirm that the authentication works in step 8, when we attempt to connect to the primary node from an external system and authenticate using the newly created superuser.
In this recipe, we saw how to start a new replica set using key files. If you already have a replica set, you can still enable key-file-based authorization but you'll have to carefully sequence your steps, like so:
This chapter contains the following recipes:
Like all user-friendly applications, MongoDB is extremely easy to set up and run out of the box. The default settings provided by MongoDB may not be optimal for all workloads and this can prove expensive post deployment. Hence, it becomes extremely important to consider the nuances that are involved in setting up a robust MongoDB infrastructure from the get-go.
The aim of this chapter is to highlight the key points that one must consider when deploying and running MongoDB systems in a production environment. We will look at the aspects of configuring MongoDB, operating system settings, and selecting upgrade strategies. Furthermore, we will look at how to encrypt server-to-server communication using TLS certificates. We will also learn how to ensure that selective firewall rules are implemented to restrict access to MongoDB.
In this recipe, we will look at important factors that should be configured when setting up a MongoDB instance. These include MongoDB as well as operating system parameters.
You will need MongoDB binaries and a Linux operating system.
MongoDB highly recommends that you choose the XFS filesystem over Ext4, especially when using the WiredTiger storage engine. It provides better concurrent disk I/O as well as extent-based allocation of data (which reduces fragmentation), yielding a significant performance improvement over Ext4. To create an XFS-based volume, simply do the following:
apt-get install xfsprogs
mkfs.xfs /dev/<device-name>
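The next step appears to be a raw sequential write test: dd writes 1 GB of zeroes straight to the device and reports the achieved throughput. Note that writing to the raw device is destructive to anything stored on it, so this kind of baseline test should only be run on a disk that does not yet hold data: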
dd if=/dev/zero of=/dev/<disk> bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.3648 s, 787 MB/s
echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag
ulimit -a
-f (file size): unlimited
-t (cpu time): unlimited
-v (virtual memory): unlimited
-n (open files): 64000
-m (memory size): unlimited
-u (processes/threads): 64000
use admin
db.shutdownServer()
kill -2 <mongod pid>
In this recipe, we will look at how to upgrade MongoDB binaries in a replica set. This recipe holds true even for config and shard servers.
We will assume you have a three-node MongoDB replica set.
use admin
db.shutdownServer()
rs.stepDown()
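Although the individual steps are listed above only as commands, a hedged sketch of the rolling pattern for one secondary (the port and path follow the replica set used earlier in this book) looks like this:
# shut down one secondary cleanly
mongo --port 27018 --eval 'db.getSiblingDB("admin").shutdownServer()'
# install the new MongoDB binaries, then restart the member with its original options
mongod --dbpath /data/server2/db --replSet MyReplicaSet --port 27018
# wait until rs.status() reports the member as SECONDARY before moving to the next one
The primary is upgraded last: run rs.stepDown() on it so that a freshly upgraded secondary takes over, then repeat the same shutdown, upgrade, and restart cycle on the old primary.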
When upgrading a sharded cluster, your sequence of steps should be as follows:
In this recipe, we will look at how to use X.509 certificates to encrypt traffic sent to MongoDB servers. Although Transport Layer Security (TLS) is the correct term, for legacy naming reasons it is still often referred to as SSL.
You need the standard MongoDB binaries.
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
openssl genrsa -out server1.key 2048
openssl req -new -subj "/CN=server1.foo.com/O=ACME/C=AU" -key server1.key -out server1.csr
openssl x509 -req -days 365 -in server1.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server1.crt
cat server1.key server1.crt > server1.pem
mongod --dbpath /data/db --sslPEMKeyFile server1.pem --sslMode requireSSL
mongo --ssl --sslCAFile ca.crt server1.foo.com:27017
Although explaining how an SSL/TLS connection works is beyond the scope of this book, I will still give a short description of what we are trying to accomplish here. We begin by creating a CA certificate (ca.crt) and private key (ca.key). The private key will be used to sign any subsequent SSL certificates, and those signatures can be verified using the CA's certificate. In step 1, we use the openssl command to create our own CA key and certificate.
Creating certificates for servers and clients using this CA is a three-step process. First, as shown in step 2, we generate a private key for the server on which we are going to start the mongod instance. Next, as shown in step 3, we create a certificate signing request (CSR) for this server. As you can see, the CN field has to match the hostname of the server; otherwise, your clients will fail hostname validation when attempting to connect to it. Lastly, in step 4, we use this CSR to generate a certificate for the server, signed by the private key of our CA.
That's it! You now have a fully functional certificate, signed by your own CA, to be used on this server. As MongoDB expects a .pem file, we concatenate the .key and .crt files, as shown in step 5. Note the order of concatenation: first the server1.key file and then the server1.crt file, so that the resulting .pem file contains the private key followed by its matching certificate.
In step 6, we start the mongod instance and provide the path to the .pem file using the --sslPEMKeyFile parameter. Additionally, we have to specify the SSL mode using the --sslMode flag. The valid options for this flag are as follows:
In our case, we use requireSSL to force all connections to use only SSL mode.
Once the server is started, we can connect to it using the mongo client, while passing it the --ssl option. We also have to provide it with the CA file to validate the certificate presented by the server; this is done using the --sslCAFile flag.
There you have it! A simple yet robust method to encrypt all communications to your MongoDB service.
It is extremely important that the file permissions and ownership of the key files are kept secure. Assuming mongod runs as the mongodb user, change the ownership of the certificate and keys to the mongodb user and restrict the file permissions so that only the owner can read them, like so:
chown mongodb server1.key
chmod 600 server1.key
In addition to protocol encryption, MongoDB also allows server/client authentication using certificates. In this mode, the server/client must present a valid certificate signed by the CA provided via the --sslCAFile flag.
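A hedged sketch of what that looks like: the client certificate (client.pem below is hypothetical) would be generated exactly like server1.pem and signed by the same CA, and both sides then validate each other:
mongod --dbpath /data/db --sslMode requireSSL --sslPEMKeyFile server1.pem --sslCAFile ca.crt
mongo --ssl --sslCAFile ca.crt --sslPEMKeyFile client.pem server1.foo.com:27017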
In this recipe, we will take a quick look at how to use Linux IPTables to add firewall rules that can restrict unwanted access to MongoDB processes.
You need standard MongoDB binaries on a Linux operating system. We are going to use Uncomplicated Firewall (UFW), a handy wrapper built on top of IPTables. We assume that you have a three-node replica set running on the following hosts:
| Hostname | IP |
| server1.foo.com | 10.1.1.1 |
| server2.foo.com | 10.1.1.2 |
| server3.foo.com | 10.1.1.3 |
apt-get install ufw
ufw enable
ufw allow from 10.1.1.1 to any port 27017
ufw allow from 10.1.1.2 to any port 27017
ufw allow from 10.1.1.3 to any port 27017
ufw deny from any to any port 27017
ufw status numbered

In the previous chapter, we looked at various methods of implementing authentication and authorization on MongoDB instances. As an avid believer in security by obscurity, I feel database servers should also have network-level access restrictions in place, such that unwanted systems cannot simply connect to them. In our overly simple example, we looked at how to restrict access to a three-node MongoDB replica set by allowing access only from the members' IPs to their respective port (27017) and denying access to anyone else connecting to port 27017.
We began by installing Ubuntu's ufw package in step 1. Next, in step 2, we enabled the UFW service. In step 3, we added three specific rules that allow access from the mentioned IPs to any destination on port 27017. Finally, in step 4, we denied all other incoming connections to port 27017.
How does this work? The firewall creates a list of rules, starting from the three allow rules and ending with the deny rule at the bottom. For any incoming connection to port 27017, if the IP of the client machine matches that in our rules, the connection is let through and any other connection is simply dropped. We can see the sequence of these rules by running the ufw status numbered command.
Once this simple firewall rule set is in place, you can further add the IPs of your application servers that will be connecting to the database.
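For example, to allow a hypothetical application server at 10.1.1.50 through while keeping the final deny rule at the bottom of the list, you could insert the new rule above it:
ufw insert 1 allow from 10.1.1.50 to any port 27017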