Optimizing with Hashes
The previous time series implementation uses one Redis key for each second, minute, hour, and day. In a scenario where an event is inserted every second, a full day (assuming it starts at 00:00:00) creates 87,865 keys in Redis:
- 86,400 keys for the 1sec granularity (60 * 60 * 24).
- 1,440 keys for the 1min granularity (60 * 24).
- 24 keys for the 1hour granularity (24).
- 1 key for the 1day granularity.
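For illustration, here is a minimal sketch of how such per-granularity String keys might be derived from an event timestamp. The key naming scheme and the `insert` helper are hypothetical, not the exact code of the previous implementation:

```python
import redis

# Hypothetical granularities: name -> duration of one bucket, in seconds.
GRANULARITIES = {"1sec": 1, "1min": 60, "1hour": 3600, "1day": 86400}

client = redis.Redis()  # assumes a local Redis server

def insert(event_name, timestamp):
    """Increment one String key per granularity for this event."""
    for name, duration in GRANULARITIES.items():
        # Round the timestamp down to the start of its bucket.
        bucket = timestamp - (timestamp % duration)
        key = f"events:{event_name}:{name}:{bucket}"
        client.incr(key)

insert("purchases", 1500000000)
```

With this scheme, every second of activity touches a brand-new 1sec key, which is exactly why the key count grows so quickly.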
This is an enormous number of keys per day, and it grows linearly over time. A large number of keys makes debugging harder, and every key carries memory overhead. In a benchmark we ran, inserting one event per second for 24 hours (86,400 events), Redis allocated about 11 MB.
We can optimize this solution by using Hashes instead of Strings. Small Hashes are encoded in a different, memory-optimized data structure called a ziplist. There are two conditions for a Hash to be encoded as a ziplist: it must have fewer fields than the limit set in hash-max-ziplist-entries (128 by default), and every field value must be smaller than hash-max-ziplist-value bytes (64 by default).
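As a quick way to see this in practice, the sketch below inspects both configuration limits and asks Redis how it encoded a small Hash. It assumes redis-py against a local server, and a Redis version that still uses the ziplist encoding (Redis 7+ reports listpack instead):

```python
import redis

client = redis.Redis()  # assumes a local Redis server

# The two limits that control the compact encoding (defaults shown for Redis 6).
print(client.config_get("hash-max-ziplist-entries"))  # {'hash-max-ziplist-entries': '128'}
print(client.config_get("hash-max-ziplist-value"))    # {'hash-max-ziplist-value': '64'}

client.delete("small-hash")
client.hset("small-hash", mapping={"field1": "1", "field2": "2"})

# A Hash under both limits is stored in the memory-optimized encoding.
print(client.object("encoding", "small-hash"))  # b'ziplist' (b'listpack' on Redis 7+)
```

If either limit is exceeded, Redis silently converts the Hash to the regular hash table encoding, which is faster for large Hashes but loses the memory savings.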