One of the fundamental requirements of any streaming application is to process inbound data from different sources and produce results almost instantaneously. Latency and throughput are the primary initial considerations for meeting that requirement; in other words, the performance of a streaming application is measured in terms of latency and throughput.
A streaming application is expected to produce outcomes as quickly as possible and to handle a high rate of incoming streams. Both factors influence the choice of technology and the hardware capacity used in a streaming solution. Before examining their impact in detail, let's first define both terms.
Latency is defined as the time (typically in milliseconds) taken by the streaming application to process an event, or a group of events, and produce an output after the events...
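To make the two metrics concrete, here is a minimal Python sketch (not from the source) that measures per-event latency and overall throughput for a dummy processing function. The function `process_event` is a hypothetical stand-in; a real streaming job would perform parsing, enrichment, or aggregation instead.

```python
import time

def process_event(event):
    # Hypothetical placeholder for real per-event work
    # (parsing, enrichment, aggregation, etc.)
    return event * 2

events = range(10_000)
latencies = []

start = time.perf_counter()
for event in events:
    t0 = time.perf_counter()
    process_event(event)
    # Latency: time to process one event, in milliseconds
    latencies.append((time.perf_counter() - t0) * 1000)
elapsed = time.perf_counter() - start

avg_latency_ms = sum(latencies) / len(latencies)
# Throughput: events processed per second over the whole run
throughput = len(latencies) / elapsed

print(f"average latency: {avg_latency_ms:.4f} ms")
print(f"throughput: {throughput:.0f} events/sec")
```

Note that low average latency does not by itself guarantee high throughput (or vice versa); the two are related but distinct, which is why streaming benchmarks usually report both.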