Chapter 2. The Spark Programming Model
Large-scale data processing on thousands of nodes with built-in fault tolerance has become widespread thanks to the availability of open source frameworks, with Hadoop being a popular choice. These frameworks are quite successful at specific tasks, such as Extract, Transform, and Load (ETL) and storage applications that deal with web-scale data. However, developers were left with a myriad of tools to choose from, in addition to the well-established Hadoop ecosystem. There was a need for a single, general-purpose development platform that caters to batch, streaming, interactive, and iterative requirements. This was the motivation behind Spark.
The previous chapter outlined the big data analytics challenges and how Spark addressed most of them at a very high level. In this chapter, we will examine the design goals and choices involved in the making of Spark to get a clearer understanding of its suitability as a data science platform for big data.