Optimizing throughput
Throughput is of the utmost importance for a stream processor. Is a specific stream processor keeping pace with the rate of incoming events, or is it falling behind? Is the processor overworked, or is it spending too much time waiting? Are we putting too much load on the target resource? There are many ways to optimize throughput and ensure that we are neither overloading nor wasting resources.
In the Living in an eventually consistent world section, we introduced SEDA and Little’s Law and discussed their overall impact on eventually consistent systems. Now, it is time to look at these again from the perspective of individual stream processors.
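As a quick refresher, Little's Law relates the average number of events in flight to the arrival rate and the average time each event spends in the processor. The following sketch shows one way we might apply it to an individual stream processor; the requiredConcurrency helper is a hypothetical illustration, not a function from the text.

```typescript
// Little's Law: L = lambda * W
// L      = average number of events in flight (required concurrency)
// lambda = average arrival rate (events per second)
// W      = average time an event spends in the processor (seconds)

// Hypothetical helper: estimate the concurrency a processor needs to keep
// pace with the incoming event rate, given its average processing latency.
function requiredConcurrency(arrivalRatePerSec: number, avgLatencySec: number): number {
  return Math.ceil(arrivalRatePerSec * avgLatencySec);
}

// Example: 500 events/sec arriving with 200 ms average processing time
// means roughly 100 events must be in flight to avoid falling behind.
console.log(requiredConcurrency(500, 0.2)); // 100
```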
Let’s see how we can divide individual stream processors into stages and implement parallelism to optimize throughput.
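To make this concrete, here is a minimal sketch of a processor broken into stages, with the slowest stage fanned out across a bounded number of parallel workers. The event types, stage names, and the mapWithConcurrency helper are illustrative assumptions rather than part of any specific library.

```typescript
// Hypothetical event and stage types for illustration only.
interface RawEvent { body: string; }
interface DomainEvent { id: string; payload: unknown; }

// Run an async operation over a batch with a bounded degree of parallelism,
// so a slow stage (such as writing to the target resource) does not
// serialize the whole pipeline.
async function mapWithConcurrency<T, R>(
  items: T[],
  concurrency: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  const workers = Array.from({ length: concurrency }, async () => {
    while (next < items.length) {
      const index = next++;
      results[index] = await fn(items[index]);
    }
  });
  await Promise.all(workers);
  return results;
}

// Stages of the processor: parse, transform, and save.
const parse = (raw: RawEvent): DomainEvent => JSON.parse(raw.body);
const transform = (event: DomainEvent) => ({ ...event, processedAt: Date.now() });
const save = async (event: DomainEvent) => { /* write to the target resource */ };

// The processor handles one batch at a time: the cheap stages run in a simple
// loop, while the expensive save stage runs with bounded parallelism.
async function handleBatch(batch: RawEvent[]): Promise<void> {
  const events = batch.map(parse).map(transform);
  await mapWithConcurrency(events, 8, save);
}
```

Keeping the parallelism of each stage bounded is what lets us tune the processor: raising the concurrency of a waiting-heavy stage improves throughput, while capping it protects the target resource from overload.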
Batch size function parameter
Batch size is one of the main parameters we can use to tune the performance of a stream processor. It controls how much work we feed to a processor...
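To make the role of this parameter concrete, here is a minimal, platform-agnostic sketch; the Source interface and poll function are illustrative assumptions rather than a specific platform's API. In a serverless deployment, the same knob typically appears as a batch size setting on the function's event source.

```typescript
// Hypothetical source interface: the real source could be a stream shard,
// a queue, or a change feed.
interface Source<T> {
  readBatch(maxRecords: number): Promise<T[]>;
}

// The batch size bounds how much work each invocation of the processor
// receives: larger batches raise throughput per invocation, but also raise
// per-invocation latency and the load placed on the target resource.
async function poll<T>(
  source: Source<T>,
  batchSize: number,
  handleBatch: (batch: T[]) => Promise<void>,
): Promise<void> {
  for (;;) {
    const batch = await source.readBatch(batchSize);
    if (batch.length === 0) break; // nothing left to read
    await handleBatch(batch);
  }
}
```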