Chapter 1, Introduction to Apache Spark, provides an introduction to Spark 2.0. It briefly describes the different Spark components, including Spark Core, Spark SQL, Spark Streaming, machine learning, and graph processing, and discusses the advantages of Spark over other similar frameworks.
Chapter 2, Apache Spark Installation, provides a step-by-step guide to installing Spark on an AWS EC2 instance from scratch. It also helps you install all the prerequisites, such as Python, Java, and Scala.
Chapter 3, Spark RDD, explains the Resilient Distributed Dataset (RDD) API, which is the heart of Apache Spark. It also discusses the various transformations and actions that can be applied to an RDD.
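To give a flavor of this API, here is a minimal, self-contained Scala sketch (the application name, local master, and sample numbers are illustrative, not taken from the book): transformations such as filter and map are lazy, and only an action such as count or collect triggers the computation.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddExample {
  def main(args: Array[String]): Unit = {
    // Local SparkContext; "local[*]" and the app name are illustrative choices
    val conf = new SparkConf().setAppName("RddExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Transformations (filter, map) are lazy; nothing runs until an action is called
    val numbers = sc.parallelize(1 to 10)
    val evenSquares = numbers.filter(_ % 2 == 0).map(n => n * n)

    // Actions (count, collect) trigger the actual computation
    println(s"Count: ${evenSquares.count()}")       // 5
    println(evenSquares.collect().mkString(", "))   // 4, 16, 36, 64, 100

    sc.stop()
  }
}
```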
Chapter 4, Spark DataFrame and Dataset, covers Spark's structured APIs: DataFrame and Dataset. It also describes the various operations that can be performed on a DataFrame or Dataset.
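As a rough illustration (the Person case class and sample data are hypothetical), a Dataset offers a strongly typed, compile-time-checked API, while a DataFrame is the untyped Dataset[Row] view queried through column expressions:

```scala
import org.apache.spark.sql.SparkSession

// Top-level case class so Spark can derive an encoder for it
case class Person(name: String, age: Int)

object DataFrameDatasetExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DataFrameDatasetExample")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Dataset: a strongly typed collection checked at compile time
    val people = Seq(Person("Alice", 29), Person("Bob", 35)).toDS()

    // DataFrame: the untyped Dataset[Row] view, queried with column expressions
    people.toDF().filter($"age" > 30).select("name").show()

    spark.stop()
  }
}
```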
Chapter 5, Spark Architecture and Application Execution Flow, explains the interaction between the different services involved in executing a Spark application. It describes the role of worker nodes, executors, and the driver in application execution in both client and cluster modes. It also explains how Spark builds a Directed Acyclic Graph (DAG) consisting of stages and tasks.
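As a small illustration of stage boundaries (a hedged sketch; the word list and application name are invented), a shuffle operation such as reduceByKey splits the job into two stages, which can be inspected with toDebugString:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DagExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DagExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // reduceByKey requires a shuffle, so Spark splits this job into two stages
    val words = sc.parallelize(Seq("spark", "scala", "spark", "dag"))
    val counts = words.map(word => (word, 1)).reduceByKey(_ + _)

    // toDebugString prints the RDD lineage; indentation marks the stage boundary at the shuffle
    println(counts.toDebugString)

    // The action triggers execution of the whole DAG
    println(counts.collect().mkString(", "))

    sc.stop()
  }
}
```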
Chapter 6, Spark SQL, discusses how Spark supports SQL operations through the Spark SQL interface and the DataFrame API. It also covers Spark's seamless integration with the Hive metastore.
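A minimal sketch of the Spark SQL interface, assuming a local session (the sample data is invented, and enableHiveSupport only works if the Hive libraries and a metastore are available on the classpath):

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlExample {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() is optional; it requires Hive libraries plus a reachable metastore
    val spark = SparkSession.builder()
      .appName("SparkSqlExample")
      .master("local[*]")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Register a DataFrame as a temporary view and query it with plain SQL
    Seq(("Alice", 29), ("Bob", 35)).toDF("name", "age").createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()

    spark.stop()
  }
}
```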
Chapter 7, Spark Streaming, Machine Learning, and Graph Analysis, explores the Spark APIs for working with real-time data streams, machine learning, and graphs. It also explains how to choose the right component for a given use case.
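For example, a minimal Structured Streaming word count following the standard socket-source pattern (the host and port are placeholders; something must be writing lines to that socket, for instance nc -lk 9999):

```scala
import org.apache.spark.sql.SparkSession

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("StreamingWordCount")
      .master("local[2]")
      .getOrCreate()
    import spark.implicits._

    // Read a stream of lines from a socket (host and port are placeholder values)
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Split each line into words and count occurrences across the stream
    val counts = lines.as[String]
      .flatMap(_.split(" "))
      .groupBy("value")
      .count()

    // Print the running counts to the console as new data arrives
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```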
Chapter 8, Spark Optimizations, covers different optimization techniques to improve the performance of your Spark applications. It explains how to tune resources such as executors and memory to better parallelize your tasks.
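As an illustration of the kind of knobs involved (the values shown are placeholders, not recommendations), executor count, executor memory, cores per executor, and shuffle parallelism can be set through the session configuration, or more commonly through the equivalent spark-submit flags:

```scala
import org.apache.spark.sql.SparkSession

object TunedApp {
  def main(args: Array[String]): Unit = {
    // Resource settings are illustrative; sensible values depend on your cluster and workload
    val spark = SparkSession.builder()
      .appName("TunedApp")
      .config("spark.executor.instances", "4")      // number of executors requested
      .config("spark.executor.memory", "4g")        // heap memory per executor
      .config("spark.executor.cores", "2")          // concurrent tasks per executor
      .config("spark.sql.shuffle.partitions", "64") // partitions (tasks) after a shuffle
      .getOrCreate()

    // ... application logic ...

    spark.stop()
  }
}
```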