Improving performance with common strategies
The performance of a pipeline refers to how quickly a data load can be processed, while throughput refers to the volume of data that can be processed in a given amount of time. In a big data system, both are key metrics for scalability. Let's look at some common ways to improve performance:
- Increase the level of parallelism: Break a large workload into smaller independent chunks that can be processed in parallel (see the first sketch after this list).
- Better code: Efficient algorithms and well-written code crunch through the same business transformations faster (see the second sketch after this list).
- Workflow that captures task dependencies: Not all tasks can run independently; there are inherent dependencies between them. Pipelining, or orchestration, chains these dependencies as DAGs, where the lineage determines which tasks can run simultaneously and which must wait until all of their upstream stages have completed successfully (see the last sketch after this list). Even better is the option to share compute across some of these tasks.
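As an illustration of the first point, here is a minimal sketch of chunk-level parallelism using Python's standard `concurrent.futures` pool. The chunk size, the `transform` function, and the sample data are all hypothetical stand-ins for a real data load and business transformation.

```python
from concurrent.futures import ProcessPoolExecutor

def transform(chunk):
    # Placeholder business transformation applied to one independent chunk
    return [row * 2 for row in chunk]

def split_into_chunks(data, chunk_size):
    # Break the large workload into smaller, independent chunks
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

if __name__ == "__main__":
    data = list(range(1_000_000))              # stand-in for a large data load
    chunks = split_into_chunks(data, 100_000)

    # The chunks have no dependencies on each other, so they can run in parallel
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(transform, chunks))

    processed = [row for chunk in results for row in chunk]
    print(len(processed))
```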
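To make the second point concrete, here is a hypothetical example of the same transformation (deduplicating rows) written two ways: a quadratic-time version and a linear-time version that uses a set. Both produce the same output; only the algorithm differs.

```python
def dedupe_slow(rows):
    # O(n^2): scans the growing output list once per input row
    seen = []
    for row in rows:
        if row not in seen:
            seen.append(row)
    return seen

def dedupe_fast(rows):
    # O(n): membership checks against a set are constant time on average
    seen, out = set(), []
    for row in rows:
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out
```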
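The last point can be sketched with Python's standard `graphlib` module: tasks are declared with their upstream dependencies, and the topological order tells an orchestrator which tasks are ready to run simultaneously and which must wait for their dependent stages. The task names and the dependency graph below are made up for illustration.

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (its upstream stages)
dag = {
    "extract_orders":    set(),
    "extract_customers": set(),
    "clean_orders":      {"extract_orders"},
    "clean_customers":   {"extract_customers"},
    "join_and_load":     {"clean_orders", "clean_customers"},
}

sorter = TopologicalSorter(dag)
sorter.prepare()

while sorter.is_active():
    ready = sorter.get_ready()   # tasks whose dependencies have all completed
    print("Run in parallel:", ready)
    for task in ready:
        sorter.done(task)        # mark finished so downstream tasks become ready
```

In a real orchestrator, the tasks returned by `get_ready()` would be submitted to a shared compute pool rather than printed, which is where the parallelism and dependency tracking come together.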