There are a number of factors that affect how many Splunk indexers you will need, but starting with a model system with typical usage levels, the short answer is 100 gigabytes of raw logs per day per indexer. For example, 500 gigabytes of raw logs per day would call for roughly five indexers. In the vast majority of cases, the disk is the performance bottleneck; the exception is systems with very slow processors.
The measurements that follow assume that you will spread events evenly across your indexers, using the autoLB feature of the Splunk forwarder. We will talk more about this in the Indexer load balancing section.
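To make that assumption concrete, here is a minimal sketch of a forwarder's outputs.conf with automatic load balancing enabled. The group name my_indexers, the hostnames, and the receiving port 9997 are illustrative placeholders; substitute the values for your environment.

```ini
# outputs.conf on the forwarder (illustrative sketch only)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# Spread events across all listed indexers; newer Splunk versions
# enable automatic load balancing by default.
autoLB = true
server = indexer1.example.com:9997, indexer2.example.com:9997
```

With events distributed evenly, each indexer handles roughly its share of the total daily volume, which is what the per-indexer figure above assumes.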
The model system looks like this:
- 8 gigabytes of RAM. If more memory is available, the operating system will use whatever Splunk does not use for the disk cache.
- Eight fast physical processors. On a busy indexer, two cores will probably be busy most of the time, handling indexing tasks. It is worth noting the following:
  - More processors won't hurt, but they will probably not make much difference, since the disks holding the indexes are typically the bottleneck.