Latency is the difference between the time assigned to an event (usually parsed from the text) and the time it was written to the index. These times are captured in _time and _indextime, respectively.
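For example, an event whose raw text is stamped 12:00:00 (_time) but which is not written to the index until 12:00:30 (_indextime) has a latency of 30 seconds.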
This query shows the minimum, average, and maximum latency across our events:
sourcetype=impl_splunk_gen | eval latency = _indextime - _time | stats min(latency) avg(latency) max(latency)
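Summary statistics can hide outliers. To see how latency is distributed, one option is to bucket the latency values and count the events in each bucket; this is a sketch using the standard bin and stats commands, with an arbitrary 10-second span:
sourcetype=impl_splunk_gen | eval latency = _indextime - _time | bin latency span=10 | stats count by latency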
In my case, these statistics look as shown in the following screenshot:
The latency in this case is exaggerated, because the script behind impl_splunk_gen creates events in chunks. In most production Splunk instances, the latency is just a few seconds. If there is any slowdown, perhaps because of network issues, the latency may increase dramatically, and so it should be accounted for in any search or alert that depends on recent events.
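One way to account for latency is to select events by index time instead of parsed time, so that events that arrive late are still picked up. This is a sketch using Splunk's _index_earliest and _index_latest time modifiers, with an arbitrary 15-minute window:
sourcetype=impl_splunk_gen _index_earliest=-15m@m _index_latest=@m
A scheduled search written this way sees every event indexed during the window, no matter how old the parsed timestamp inside the event is.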
This query will produce a table showing the parsed time, the index time, and the latency for every event, with both timestamps formatted down to the millisecond:
sourcetype=impl_splunk_gen | eval latency = _indextime - _time | eval time=strftime(_time,"%Y-%m-%d %H:%M:%S.%3N") | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S.%3N") | table time indextime latency
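The %3N in the strftime format adds three digits of sub-second precision, so differences smaller than a second show up in the time column instead of being hidden.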