Creating an Azure Cosmos database
From the Azure portal, you can open the page for your Azure Cosmos DB account, open Data Explorer, and from there, click on New Database to create a new database, and New Container to create a container within the database. Here, we’ll use the Azure CLI instead:
az cosmosdb sql database create --account-name <your cosmos account name> -n codebreaker -g rg-codebreaker-test --throughput 400
This command creates a database named codebreaker in the existing account. Setting the throughput option with this command defines the scale of the database: all containers within this database share the 400 RU/s throughput, and 400 is the smallest value that can be set. Instead of supplying this value when creating the database, throughput can also be configured with every container. If some containers should not take throughput away from other containers, configure the RU/s per container – but the minimum value per container is 400 as well.
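As a sketch of the per-container alternative, the throughput can be supplied when creating a container instead of when creating the database. The container name GamesWithDedicatedRU is only a placeholder for illustration; a container with dedicated throughput can also live inside a shared-throughput database:
az cosmosdb sql container create -g rg-codebreaker-test -a <your cosmos account name> -d codebreaker -n GamesWithDedicatedRU --partition-key-path "/PartitionKey" --throughput 400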
After creating the database, let’s create a container:
az cosmosdb sql container create -g rg-codebreaker-test -a <your cosmos account name> -d codebreaker -n GamesV3 --partition-key-path "/PartitionKey"
The implementation of the gamesAPI service uses a container named GamesV3. This container is created within the previously created database, using the /PartitionKey partition key, as was specified with the EF Core context in Chapter 3.
After this command completes, check Data Explorer in the Azure portal, as shown in Figure 6.6:
Figure 6.6 – Data Explorer
You can see the database, the container, and the partition key configured for the container.
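If you prefer to verify this from the command line rather than the portal, the container details can be queried as well. This is a minimal sketch; the --query expression assumes the partition key appears under resource.partitionKey in the command's JSON output:
az cosmosdb sql container show -g rg-codebreaker-test -a <your cosmos account name> -d codebreaker -n GamesV3 --query "resource.partitionKey"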
Configuring replication with Azure Cosmos DB
A great feature of Azure Cosmos DB is global data replication. Within the Azure portal, in the Settings category, click on Replicate data globally. Figure 6.7 shows the replication view:
Figure 6.7 – Replication with Azure Cosmos DB
You just need to click on the Azure regions that are available with your subscription to replicate data to the selected regions. You can also configure writes to multiple regions.
With the codebreaker application, where users around the world can play, writing to multiple regions can be configured for faster performance for users in the US, Europe, Asia, and Africa. For this option to be available, automatic scaling cannot be configured. For the best scalability across the globe, we also need to think about the partition key. By using different partition key values for every game that's stored, games can be stored within different partitions.
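Replication can also be scripted with the Azure CLI instead of using the portal. The following sketch assumes West Europe and East US as example regions; the first command sets the regions and their failover priorities, and the second enables writes to multiple regions (locations and other account properties are changed in separate update calls):
az cosmosdb update --name <your cosmos account name> -g rg-codebreaker-test --locations regionName=westeurope failoverPriority=0 isZoneRedundant=false --locations regionName=eastus failoverPriority=1 isZoneRedundant=false
az cosmosdb update --name <your cosmos account name> -g rg-codebreaker-test --enable-multiple-write-locations true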
Configuring consistency
In the Settings category of the Azure Cosmos DB page in the Azure portal, we can configure the default consistency level. The effect of each level is illustrated with music notes being written to and read from multiple regions, as shown in Figure 6.8:
Figure 6.8 – Outcome shown using music notes
The default setting is Session consistency – the data is consistent within the same session. With this setting, write latencies, availability, and read throughput are comparable to Eventual consistency. Using the Azure Cosmos DB API, a session can be created and its session token distributed within the application.
The Strong consistency option is not available if writes to multiple regions are configured. With multiple regions, Bounded staleness can be configured, which specifies a maximum lag time and a maximum number of lag operations before the data is consistently replicated.
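The default consistency level can also be changed with the Azure CLI. Here is a sketch that configures Bounded staleness; the lag values are examples only, and multi-region accounts require larger minimums than a single region:
az cosmosdb update --name <your cosmos account name> -g rg-codebreaker-test --default-consistency-level BoundedStaleness --max-interval 300 --max-staleness-prefix 100000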
The database is now ready to use, so let’s publish Docker images to the registry!