No matter how advanced and well scaled your Splunk infrastructure is, if all scheduled reports and alerts run at the same time, the system will start experiencing performance issues. Typically, Splunk will display a message saying that you have reached the maximum number of concurrent historical searches. Only a certain number of searches can run on the fixed CPU capacity of each Splunk server or collection of servers, so a problem every Splunk administrator inevitably faces is how to limit the number of searches running at the same time. One way to address this is to throw more servers at your Splunk environment, but that is not cost efficient.
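For reference, that concurrency ceiling is derived from a handful of limits.conf settings. The following is a minimal sketch of the settings typically involved; the values shown reflect common defaults and may differ in your Splunk version, so check the limits.conf specification for your release before relying on them:

    # limits.conf (illustrative defaults; verify against your version's spec)
    [search]
    # Baseline number of concurrent historical searches, regardless of CPU count
    base_max_searches = 6
    # Additional concurrent historical searches allowed per CPU core
    max_searches_per_cpu = 1
    # Effective ceiling = max_searches_per_cpu x number_of_cpus + base_max_searches

    [scheduler]
    # Percentage of the concurrent-search ceiling the scheduler may consume
    max_searches_perc = 50

With these illustrative defaults, an 8-core search head allows roughly 1 x 8 + 6 = 14 concurrent historical searches, of which the scheduler may use only about half, which is why poorly staggered schedules hit the limit so quickly.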
It is important to plan and properly stagger scheduled searches, reports, alerts, dashboards, and so on, ensuring they are not all running at the same time; a brief staggering sketch follows this paragraph. In addition to the schedule time...
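As a sketch of what staggering can look like in practice, the savedsearches.conf excerpt below spreads two hourly reports across different minutes of the hour and gives each a schedule window so the scheduler can delay them when the system is busy. The report names are hypothetical; cron_schedule and schedule_window are standard savedsearches.conf settings, but confirm their exact behavior against the documentation for your version:

    # savedsearches.conf (hypothetical report names; staggered cron schedules)
    [Hourly Error Summary]
    enableSched = 1
    # Run at 5 minutes past the hour instead of the top of the hour
    cron_schedule = 5 * * * *
    # Allow the scheduler to delay this run by up to 10 minutes under load
    schedule_window = 10

    [Hourly License Usage Report]
    enableSched = 1
    # Run at 35 minutes past the hour so it never collides with the report above
    cron_schedule = 35 * * * *
    # "auto" lets the scheduler size the window from the search's typical run time
    schedule_window = auto

The same idea applies to daily and weekly reports: avoid scheduling everything at midnight or exactly on the hour, and prefer odd minute offsets so the load spreads naturally.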