Introducing the pipeline pattern
The pipeline software design pattern is used where data flows through a sequence of stages, with the output of each stage serving as the input of the next. Each stage can be thought of as a filter that transforms the data in some way. Buffering is frequently added between filters to prevent deadlock or data loss when one filter runs faster than the filter it feeds. Connecting the filters into a pipeline is analogous to function composition.
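To make the idea concrete, here is a minimal Go sketch of the pattern, assuming each filter runs as a goroutine and buffered channels stand in for the inter-filter buffers. This is an illustration, not the chapter's code; the stage names (generator, filter, transform) are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// generator is the data source stage: it feeds lines into the pipeline.
func generator(lines []string) <-chan string {
	out := make(chan string, 10) // buffered, so a slower downstream filter doesn't block us immediately
	go func() {
		defer close(out)
		for _, line := range lines {
			out <- line
		}
	}()
	return out
}

// filter passes through only the lines that satisfy pred.
func filter(in <-chan string, pred func(string) bool) <-chan string {
	out := make(chan string, 10)
	go func() {
		defer close(out)
		for line := range in {
			if pred(line) {
				out <- line
			}
		}
	}()
	return out
}

// transform applies fn to every line, like a map stage.
func transform(in <-chan string, fn func(string) string) <-chan string {
	out := make(chan string, 10)
	go func() {
		defer close(out)
		for line := range in {
			out <- fn(line)
		}
	}()
	return out
}

func main() {
	lines := []string{"compile", "link", "run", "commit"}
	// Compose the stages: the output channel of one stage is the
	// input of the next, much like function composition.
	pipeline := transform(
		filter(generator(lines), func(s string) bool { return strings.HasPrefix(s, "com") }),
		strings.ToUpper,
	)
	for result := range pipeline {
		fmt.Println(result) // prints COMPILE, then COMMIT
	}
}
```

Each stage owns the channel it returns and closes it when done, which lets the final range loop terminate cleanly once the source is exhausted.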
The following diagram depicts the flow of data from a data source (for example, a file). The data is transformed as it passes from one filter to the next, until the result is finally displayed on standard output in the console:
Figure: Grep sort example
The /etc/group file is the data source. grep is the first filter, whose input is all the lines of the /etc/group file. The grep command removes all lines that do not begin with "com", and then sends its output to the Unix pipe, which sends that data...