In this chapter, we built our very own generic, extensible pipeline package from scratch using nothing more than the basic Go primitives. We analyzed and implemented different strategies (FIFO, fixed/dynamic worker pools, and broadcasting) for processing data as it flows through the various stages of our pipeline. In the last part of the chapter, we applied everything that we had learned so far to implement a multistage crawler pipeline for the Links 'R' Us project.
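As a brief refresher, the following standalone sketch illustrates the core idea behind two of these strategies: a FIFO stage processes payloads one at a time over a channel, while a fixed worker-pool stage fans the same work out across multiple goroutines. The `generate`, `square`, and `squarePool` names are hypothetical helpers for this sketch only; they are not part of the pipeline package we built in this chapter.

```go
package main

import (
	"fmt"
	"sync"
)

// generate emits each payload on its output channel and closes it when done.
func generate(payloads ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, p := range payloads {
			out <- p
		}
	}()
	return out
}

// square is a FIFO-style stage: a single goroutine processes payloads one at
// a time, preserving their original order.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for p := range in {
			out <- p * p
		}
	}()
	return out
}

// squarePool is a fixed worker-pool stage: numWorkers goroutines drain the
// same input channel concurrently, so output ordering is no longer guaranteed.
func squarePool(in <-chan int, numWorkers int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(numWorkers)
	for i := 0; i < numWorkers; i++ {
		go func() {
			defer wg.Done()
			for p := range in {
				out <- p * p
			}
		}()
	}
	// Close the output channel once every worker has finished.
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	// Chain the stages: generator -> FIFO stage -> worker-pool stage.
	for v := range squarePool(square(generate(1, 2, 3, 4)), 2) {
		fmt.Println(v)
	}
}
```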
In summary, pipelines provide an elegant solution for breaking down complex data processing tasks into smaller, easier-to-test steps that can be executed in parallel to make better use of the compute resources at your disposal. In the next chapter, we are going to take a look at a different paradigm for processing data that is organized as a graph.