Sharding has been part of MongoDB from an early stage, ever since version 1.6 was released in August 2010. Sharding is the ability to scale our database out horizontally by partitioning our datasets across different servers, known as shards. Foursquare and Bitly are two of MongoDB's most famous early customers, and both used sharding from its inception all the way to its general availability release.
In this article, we will learn how to design a sharded cluster and how to make the single most important decision around it: choosing the shard key.
This article is a MongoDB sharding tutorial taken from the book Mastering MongoDB 3.x by Alex Giamas.
Sharding is performed at the collection level. There may be collections that, for various reasons, we don't want or need to shard, and we can leave these collections unsharded. These unsharded collections are stored in the primary shard. Each database in MongoDB has its own primary shard.
The primary shard is automatically selected by MongoDB when we create a new database in a sharded environment. MongoDB will pick the shard that has the least data stored at the moment of creation.
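To see which shard was picked as the primary for a given database, we can check the sharding status from a mongos shell, or query the config database directly; for example (mongo_books is the example database used throughout this article):

> sh.status()
> db.getSiblingDB("config").databases.find( { _id : "mongo_books" } )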
If we want to change the primary shard at any other point, we can issue the following command:
> db.runCommand( { movePrimary : "mongo_books", to : "UK_based" } )
This makes the shard named UK_based the new primary shard for the mongo_books database, moving its unsharded collections there.
Choosing our shard key is the most important decision we need to make. The reason is that once we shard our data and deploy our cluster, it becomes very difficult to change the shard key.
First, we will go through the process of changing the shard key.
There is no command or simple procedure to change the shard key in MongoDB. The only way to change the shard key involves backing up and restoring all of our data, something that may range from being extremely difficult to impossible in high-load production environments.
The steps if we want to change our shard key are as follows:
1. Export all the data from MongoDB.
2. Drop the original sharded collection.
3. Configure sharding with the new key.
4. Presplit the new shard key range.
5. Restore the data back into MongoDB.
A rough sketch of this flow follows.
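The sketch below assumes a hypothetical new shard key field named isbn and a backup path chosen purely for illustration; lines starting with $ are run from the operating system shell, lines starting with > from a mongos shell:

$ mongodump --db mongo_books --collection books --out /backups/books_dump
> use mongo_books
> db.books.drop()
> sh.shardCollection("mongo_books.books", { isbn : 1 })
> // presplit the new shard key range here (see the split command explained below)
$ mongorestore --db mongo_books --collection books /backups/books_dump/mongo_books/books.bson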
From these steps, step 4 is the one that needs some more explanation.
MongoDB uses chunks to split data in a sharded collection. If we bootstrap a MongoDB sharded cluster from scratch, chunks will be calculated automatically by MongoDB. MongoDB will then distribute the chunks across different shards to ensure that there are an equal number of chunks in each shard.
The only case in which this automatic distribution does not work well is when we want to load data into a newly sharded collection.
The reasons are threefold:
- All of the data will initially land in a single chunk on a single shard, since MongoDB has no information about the distribution of our keys yet.
- MongoDB can only split a chunk after data has been inserted into it, so splitting happens while the load is in progress, adding overhead.
- The balancer can only migrate one chunk at a time per shard, and each migration is expensive in terms of I/O and network traffic.
These three limitations combined mean that letting MongoDB figure it out on its own will take much longer and may ultimately fail. This is why we want to presplit our data and give MongoDB some guidance on where our chunks should go.
Considering our example of the mongo_books database and the books collection, this would be:
> db.runCommand( { split : "mongo_books.books", middle : { id : 50 } } )
The middle command parameter will split our key space into documents that have id < 50 and documents that have id >= 50. There is no need for a document with id = 50 to exist in our collection, as this value serves only as guidance for our partitions.
In this example, we chose 50 assuming that our keys follow a uniform distribution (that is, the same count of keys for each value) in the range of values from 0 to 100.
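Once split points exist, we can also tell MongoDB which shard each chunk should live on. The shard names below are only examples: UK_based comes from the earlier movePrimary example, and US_based is a hypothetical second shard used for illustration:

> sh.splitAt("mongo_books.books", { id : 75 })
> sh.moveChunk("mongo_books.books", { id : 75 }, "US_based")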
After the previous section, it should now be evident that we need to give great consideration to the choice of our shard key, as it is something that we have to stick with.
A great shard key has three characteristics: high cardinality, low frequency, and non-monotonically changing values.
We will go over the definitions of these three properties first to understand what they mean.
High cardinality means that the shard key must have as many distinct values as possible. A Boolean can take only values of true/false, and so it is a bad shard key choice.
A 64-bit long field, which can take any value from −(2^63) to 2^63 − 1, is a good example in terms of cardinality.
Low frequency directly relates to the argument about high cardinality. A low-frequency shard key will have a distribution of values that is as close as possible to a perfectly random/uniform distribution.
Using the example of our 64-bit long value, it is of little use to us if we have a field that can take values ranging from −(2^63) to 2^63 − 1 only to end up observing the values of 0 and 1 all the time. In fact, it is as bad as using a Boolean field, which can also take only two values after all.
If we have a shard key with high frequency values, we will end up with chunks that are indivisible. These chunks cannot be further divided and will grow in size, negatively affecting the performance of the shard that contains them.
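One way to get a feel for the frequency of a candidate shard key before sharding is a simple aggregation that counts how many documents share each value; a handful of values dominating the output is a warning sign. This is only an illustrative check, using the id field from this article's examples:

> db.books.aggregate([
    { $group : { _id : "$id", count : { $sum : 1 } } },
    { $sort : { count : -1 } },
    { $limit : 10 }
  ])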
Non-monotonically changing values mean that our shard key should not be, for example, an integer that always increases with every new insert. If we choose a monotonically increasing value as our shard key, this will result in all writes ending up in the last of all of our shards, limiting our write performance.
The default and the most widely used sharding strategy is range-based sharding. This strategy will split our collection's data into chunks, grouping documents with nearby values in the same shard.
For our example database and collection, mongo_books and books respectively, we have:
> sh.shardCollection("mongo_books.books", { id: 1 } )
This creates a range-based shard key on id with ascending direction. The direction of our shard key will determine which documents will end up in the first shard and which ones in the subsequent ones.
This is a good strategy if we plan to have range-based queries as these will be directed to the shard that holds the result set instead of having to query all shards.
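For example, a range query on id will only be sent by mongos to the shards whose chunks cover that range; inspecting the query's explain output shows which shards were involved (the exact shards listed will depend on our chunk distribution):

> db.books.find( { id : { $gte : 10, $lt : 50 } } ).explain()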
If we don't have a shard key (or can't create one) that achieves the three goals mentioned previously, we can use the alternative strategy of using hash-based sharding.
In this case, we are trading query isolation for more even data distribution.
Hash-based sharding will take the values of our shard key and hash them in a way that guarantees close to uniform distribution. This way we can be sure that our data will evenly distribute across shards.
The downside is that only exact match queries will get routed to the exact shard that holds the value. Any range query will have to go out and fetch data from all shards.
For our example database and collection (mongo_books and books respectively), we have:
> sh.shardCollection("mongo_books.books", { id: "hashed" } )
Similar to the preceding example, we are now using the id field as our hashed shard key.
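With the hashed key in place, an equality match on id is still routed to a single shard, while a range query on id is broadcast to every shard; we can confirm both behaviours with explain():

> db.books.find( { id : 50 } ).explain()            // targeted to the shard owning the hash of 50
> db.books.find( { id : { $gt : 50 } } ).explain()  // scatter-gather across all shards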
Range-based sharding does not need to be confined to a single key. In fact, in most cases, we would like to combine multiple keys to achieve high cardinality and low frequency.
A common pattern is to combine a low-cardinality field as the first part (although it should still have more distinct values than twice the number of shards we have) with a high-cardinality field as the second part. The first part of the shard key gives us read and write distribution, while the second part gives us cardinality and read locality, as shown in the sketch below.
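As an illustration of this pattern, assuming our books documents carried a hypothetical country field with a few dozen distinct values, we could combine it with id as follows:

> sh.shardCollection("mongo_books.books", { country : 1, id : 1 } )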
On the other hand, if we don't have range queries, we can get away with using hash-based sharding on a primary key, as this will target exactly the shard and document that we are going after.
To make things more complicated, these considerations may change depending on our workload. A workload that consists almost exclusively (say 99.5%) of reads won't care about write distribution. We can use the built-in _id field as our shard key, and this will only add 0.5% of the load to the last shard. Our reads will still be distributed across shards.
Unfortunately, in most cases, this is not simple.
Due to government regulations and the desire to keep our data as close to our users as possible, there is often a requirement to store data in a specific data center. By placing different shards in different data centers, we can satisfy this requirement.
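In MongoDB 3.x we can implement this with tag-aware sharding (called zones in later versions): we tag each shard with its data center and assign a range of the shard key to that tag. The tag name and key range below are assumptions for illustration, reusing the UK_based shard from the earlier example:

> sh.addShardTag("UK_based", "UK_DC")
> sh.addTagRange("mongo_books.books", { id : MinKey }, { id : 50 }, "UK_DC")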
To summarize, we learned about MongoDB sharding and the techniques for choosing the correct shard key. Get the expert guide Mastering MongoDB 3.x today to build fault-tolerant MongoDB applications.