This is the first chapter where we've really begun to test Node's scalability goal. Having considered the various arguments for and against different ways of thinking about concurrency and parallelism, we arrived at an understanding of how Node has successfully maintained the advantages of threading and parallel processing while wrapping all that complexity within a concurrency model that is both easy to reason about and robust.
Going deeper into how processes work, and in particular how child processes can communicate with one another and even spawn further children, we looked at several use cases. An example of combining native Unix command processes seamlessly with custom Node processes led us to a straightforward, performant technique for processing large files. The cluster module was then applied to the problem of sharing server load across a pool of worker processes.