Deploy a MongoDB 3.0 Replica Set With X.509 Authentication and Self-Signed Certificates

April 19th, 2015

MongoDB is a popular document-oriented NoSQL database that is easy to set up and offers many powerful features. To improve data availability and prevent the loss of important information, a database should be replicated across multiple servers, ideally in geographically separated data centers. The connections between these instances, however, have to be secured. In this article we show how to set up a simple deployment of MongoDB servers across multiple data centers using MongoDB’s Replica Set mechanism, with authentication provided over SSL using self-signed X.509 certificates.

In the first step, we make sure all required software is installed properly. Then we create an administrator user and generate the necessary SSL certificates. Finally, we configure the MongoDB instances for replication.

Prerequisites

The servers you wish to use for the MongoDB cluster should be reachable via static IP addresses, and the necessary packages have to be installed. In this example we use virtual instances running Ubuntu 14.04 LTS.

For setting up MongoDB initially, follow the Official Installation Guide. In short, you can run the following commands to download and install the packages on Ubuntu:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" \
    | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
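
You can verify that the expected version was installed:

mongod --version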

Create an Admin User

When first connecting to the mongod daemon, we have to create a new user with administrative privileges. This has to be done on each server separately. Type mongo to start the MongoDB client.

The following commands create a new admin user named mongoAdmin with password mongotest. Additionally, necessary permissions to manage clusters and to have full access to any database are assigned to the user.

use admin
db.createUser({
  user: "mongoAdmin",
  pwd: "mongotest",
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" },
    { role: "dbAdminAnyDatabase", db: "admin" },
    { role: "readWriteAnyDatabase", db: "admin" },
    { role: "clusterAdmin", db: "admin" }
  ]
})
quit()
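
As a quick check, you can query the new user directly from the command line; authentication already works even though authorization is not enforced yet:

mongo admin -u mongoAdmin -p mongotest --eval "printjson(db.getUser('mongoAdmin'))"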

Now we are able to use the database on each server individually. In the next steps, we will set up the instances to work together properly.

Generate SSL Certificates

For the MongoDB servers to authenticate with each other, we need proper X.509 certificates. All of them have to be signed by the same CA, and their subject lines have to be identical except for the Common Name (CN), which must match the hostname of the machine the respective MongoDB instance runs on. As using a public CA to sign certificates can be quite costly and is not necessary inside a private infrastructure, we create a self-signed CA certificate and use it to sign the individual server certificates.

To generate a new 8192-bit private key with OpenSSL and save it as mongoCA.key, use the following command:

openssl genrsa -out mongoCA.key -aes256 8192  

It is good practice to set a strong password for key encryption.

Now we can sign a new CA certificate. During certificate creation, a number of fields have to be filled out. Their values can be chosen freely, but they should reflect your organization’s details.

openssl req -x509 -new -extensions v3_ca \
    -key mongoCA.key -days 365 -out mongoCA.crt

The created certificate is valid for one year. If you want to set a different validity period, e.g. when deploying a production system, adjust the -days option accordingly.
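
To double-check the CA certificate, e.g. its subject and validity period, you can inspect it with OpenSSL:

openssl x509 -in mongoCA.crt -noout -subject -dates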

To issue certificates for the MongoDB instances, we first generate certificate requests and then sign them using our CA certificate. For each of the servers, we run the following commands:

openssl req -new -nodes -newkey rsa:4096 \
    -keyout mongo1.key -out mongo1.csr
openssl x509 -CA mongoCA.crt -CAkey mongoCA.key -CAcreateserial \
    -req -days 365 -in mongo1.csr -out mongo1.crt
cat mongo1.key mongo1.crt > mongo1.pem

Note: Make sure that all fields in the certificate request, except the Common Name, are identical every time! Otherwise, the servers will not be able to authenticate with each other, and this problem can be hard to debug.
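
A quick way to catch such mismatches early is to verify each certificate against the CA and compare the printed subjects:

openssl verify -CAfile mongoCA.crt mongo1.crt
openssl x509 -in mongo1.crt -noout -subject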

Alternatively, you can use the following Bash script to create and sign a certificate for a given hostname (first parameter). Before using it, edit the SUBJECT line to reflect your organization.

#!/bin/bash
# Create and sign a certificate for the given hostname (Common Name).
if [ "$1" = "" ]; then
    echo 'Please enter a hostname (Common Name)!'
    exit 1
fi
HOST_NAME="$1"
SUBJECT="/C=DE/ST=Teststate/L=Testloc/O=Testorg/OU=Test/CN=$HOST_NAME"
# Generate key and certificate signing request
openssl req -new -nodes -newkey rsa:4096 \
    -subj "$SUBJECT" -keyout "$HOST_NAME.key" -out "$HOST_NAME.csr"
# Sign the request with our CA
openssl x509 -CA mongoCA.crt -CAkey mongoCA.key -CAcreateserial -req \
    -days 365 -in "$HOST_NAME.csr" -out "$HOST_NAME.crt"
rm "$HOST_NAME.csr"
# Combine key and certificate into a single .pem file
cat "$HOST_NAME.key" "$HOST_NAME.crt" > "$HOST_NAME.pem"
rm "$HOST_NAME.key" "$HOST_NAME.crt"

This script creates a ready-to-use .pem file in the current directory.
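
Assuming you saved the script as create-mongo-cert.sh (the file name is arbitrary), a run for our first host looks like this:

chmod +x create-mongo-cert.sh
./create-mongo-cert.sh mongo1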

When all certificates have been generated successfully, copy mongoCA.crt as well as the corresponding <hostname>.pem to each server. Create a directory which only the MongoDB user can read and copy both files there:

sudo mkdir -p /etc/mongodb/ssl
sudo chmod 700 /etc/mongodb/ssl
# If you created the certificates elsewhere, use scp or similar
sudo cp mongo1.pem /etc/mongodb/ssl/
sudo cp mongoCA.crt /etc/mongodb/ssl/
sudo chown -R mongodb:mongodb /etc/mongodb

In our example we use the path /etc/mongodb/ssl to store the certificates.
The hostname of our server is mongo1.
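
Since the .pem file contains the unencrypted private key, you may want to restrict its permissions further:

sudo chmod 600 /etc/mongodb/ssl/mongo1.pem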

Configure Hostnames and Firewall

For MongoDB to accept the hostnames in the certificates, we have to add them to /etc/hosts. For three MongoDB instances named mongo1 to mongo3 it may look as follows; the actual IP addresses have to be filled in, of course.

127.0.0.1 localhost
X.X.X.X mongo1  
Y.Y.Y.Y mongo2  
Z.Z.Z.Z mongo3  
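
You can check that the names resolve as intended:

getent hosts mongo1 mongo2 mongo3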

To ensure that the MongoDB instances can only be accessed by each other, you should add iptables rules on each server. For our example, a minimal configuration that accepts only local connections and connections from the other members on the default port 27017 may look as follows:

sudo iptables -A INPUT --src localhost -j ACCEPT  
sudo iptables -A INPUT -p tcp --src mongo1 --dport 27017 -j ACCEPT  
sudo iptables -A INPUT -p tcp --src mongo2 --dport 27017 -j ACCEPT  
sudo iptables -A INPUT -p tcp --src mongo3 --dport 27017 -j ACCEPT  
sudo iptables -A INPUT -p tcp --dport 27017 -j DROP  
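
To review the resulting chain, list the rules with line numbers:

sudo iptables -L INPUT -n --line-numbers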

This configuration is not persistent. See the Ubuntu Wiki on how to save the iptables rules and load them automatically at startup.

Configure MongoDB for SSL and Replication

After installing certificates and setting up the hostnames and firewall rules on all servers, we are ready to configure the MongoDB daemon.

As of version 2.6, MongoDB supports a new YAML-based configuration format, which is the preferred one for new installations. However, the official packages still ship with the old syntax. Replace the contents of /etc/mongod.conf completely with the following:

storage:
  dbPath: "/var/lib/mongodb"
systemLog:
  destination: file
  path: "/var/log/mongodb/mongod.log"
  logAppend: true
  timeStampFormat: iso8601-utc
replication:
  replSetName: "rs1"
net:
  port: 27017
  ssl:
    mode: preferSSL
    PEMKeyFile: /etc/mongodb/ssl/mongo1.pem
    CAFile: /etc/mongodb/ssl/mongoCA.crt
    clusterFile: /etc/mongodb/ssl/mongo1.pem
security:
  authorization: enabled
  clusterAuthMode: x509

This keeps all default options such as the database and log paths. Additionally, it activates SSL certificate authentication and configures replication with a new replica set named rs1. If you want to restrict client connections to SSL only, set the ssl.mode option to requireSSL; this is especially recommended for production systems.

Note that with requireSSL you have to supply the CA file and the node’s key file on every login with the mongo command. The mongod instances use SSL among themselves even if this option is set to preferSSL.

Afterwards, we can restart mongod and test the SSL connection. On our test host named mongo1 this looks as follows:

sudo service mongod restart
sudo mongo admin --ssl --sslCAFile /etc/mongodb/ssl/mongoCA.crt \
    --sslPEMKeyFile /etc/mongodb/ssl/mongo1.pem \
    -u mongoAdmin -p mongotest --host mongo1

If the authentication was successful, the instances have been properly configured for SSL. Now we are ready to connect them to each other.

Set Up the Replica Set

How to set up replication properly is described in-depth in the Official Replica Set Deployment Tutorial. If you are deploying MongoDB to geographically distributed data centers, you may want to read the corresponding guide as well.

In our example we use an architecture of three servers named mongo1 to mongo3, where two (mongo1 and mongo2) reside in the same data center and one (mongo3) is located elsewhere. In a MongoDB Replica Set, the members elect one node to become PRIMARY, which is then the only one accepting database writes. This is comparable to the master in other database systems. More information about how MongoDB handles roles in a replica set can be found in the official documentation section Replica Set Members. For possible architectures of Replica Set deployments, see Replica Set Deployment Architectures.

On one instance, preferably the one we expect to become PRIMARY (here mongo1), we start replication and add the other nodes. First, we connect to the database:

mongo admin -u mongoAdmin -p mongotest  

Now we initiate the MongoDB replica set:

rs.initiate()  

If the command returned successfully, we can add the hostnames of the other servers to the set:

rs.add("mongo2")  
rs.add("mongo3")  

The names are resolved to the IP addresses specified in /etc/hosts.
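
You can inspect the state of all members at any time; each member entry contains a stateStr field such as PRIMARY or SECONDARY:

rs.status()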

After adding the nodes, your connection may be terminated because the server reconfigures itself. This does not indicate a failure; simply connect again using the client program.

Replication should now be up and running, and one instance has been assigned the PRIMARY role.

Configure Instances in Other Data Centers to Not Become Primary

Due to the way MongoDB assigns roles in replicated clusters, the majority of nodes should reside in one data center. Furthermore, the other nodes should be prevented from becoming primary by setting their associated priority to zero. See the MongoDB documentation on why this is important.

First, we authenticate locally on the now-primary instance:

mongo admin -u mongoAdmin -p mongotest  

You can view the current state of the Replica Set with the following command:

rs.conf()  

If one of your instances resides in another data center, the Replica Set has to be reconfigured. To do this, store the configuration in a temporary variable and assign a priority of 0 to all members in other data centers. The reconfig function then applies the changed configuration to the running replica set.

cfg = rs.conf()  
// For each member X in another data center do
cfg.members[X].priority = 0  
// This may terminate your connection (intended)
rs.reconfig(cfg)  
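
In our example, mongo3 is the only member in another data center. Since it was added third, it has index 2 in the members array (verify the index with rs.conf() before changing anything):

cfg = rs.conf()
cfg.members[2].priority = 0
rs.reconfig(cfg)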

Finally, you should have a fully functional replicated MongoDB setup. This can be verified by executing the following command:

rs.printSlaveReplicationInfo()  

In case of success, the output will state that all secondary instances are in sync with the primary one.