Divvying up the data
To be resilient, Hazelcast apportions the overall data into slices known as partitions and spreads these across the cluster. It does this by applying a consistent hashing algorithm to each entry's key, assigning the entry to a particular partition; ownership of each partition, and of all the data assigned to it, is in turn allocated to a particular node. By default, there are 271 partitions within a cluster.
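As a small sketch of this in action (assuming a Hazelcast 4.x/5.x classpath; the key used here is purely illustrative), we can ask a running member which partition a given key maps to, and which node currently owns that partition, via the PartitionService:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.partition.Partition;

public class PartitionLookup {
    public static void main(String[] args) {
        // Start (or join) a cluster member with default settings: 271 partitions
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Any key can be mapped to its partition via the PartitionService
        Partition partition = hz.getPartitionService().getPartition("some-key");

        // The partition ID (0-270 by default) and the member that currently owns it
        System.out.println("Partition ID: " + partition.getPartitionId());
        System.out.println("Owner: " + partition.getOwner());

        hz.shutdown();
    }
}
```

Running the same lookup with the same key on any member of the cluster yields the same partition ID, since the mapping depends only on the key's hash.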
This process gives us transparent and automatic fragmentation of the data with tunable behavior, while letting us ensure that shared risks (such as nodes running on the same hardware or sharing the same data center rack) are mitigated, as sketched in the configuration that follows.
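One of those tuning knobs is partition grouping. As a minimal sketch (the HOST_AWARE policy is just one example of the available grouping types), the configuration below tells Hazelcast to treat all members running on the same host as a single group, so that a partition's backup is never placed on the same machine as its owner:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.PartitionGroupConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HostAwareMember {
    public static void main(String[] args) {
        Config config = new Config();

        // Group members by host address, so a partition's owner and its
        // backups never end up on the same physical machine
        config.getPartitionGroupConfig()
              .setEnabled(true)
              .setGroupType(PartitionGroupConfig.MemberGroupType.HOST_AWARE);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}
```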
We can visualize the partitioning process in the following diagram: