- Understand the fundamentals of Apache Spark to design robust and fast Spark applications
- Explore various data manipulation components for each phase of your data engineering project
- Prepare for the certification exam with sample questions and mock exams
- Purchase of the print or Kindle book includes a free PDF eBook
Spark has become a de facto standard for big data processing. Migrating data processing to Spark saves resources, streamlines your business focus, and modernizes workloads, creating new business opportunities through Spark's advanced capabilities. Written by a senior solutions architect at Databricks with experience leading data science and data engineering teams at Fortune 500 companies as well as startups, this book is your exhaustive guide to achieving the Databricks Certified Associate Developer for Apache Spark certification on your first attempt.
You’ll explore the core components of Apache Spark, its architecture, and its optimization, while familiarizing yourself with the Spark DataFrame API and its components needed for data manipulation. You’ll also find out what Spark streaming is and why it’s important for modern data stacks, before learning about machine learning in Spark and its different use cases. What’s more, you’ll discover sample questions at the end of each section along with two mock exams to help you prepare for the certification exam.
By the end of this book, you’ll know what to expect in the exam and gain enough understanding of Spark and its tools to pass the exam. You’ll also be able to apply this knowledge in a real-world setting and take your skillset to the next level.
This book is for data professionals such as data engineers, data analysts, BI developers, and data scientists looking for a comprehensive resource to achieve the Databricks Certified Associate Developer for Apache Spark certification, as well as for individuals who want to venture into the world of big data and data engineering. Working knowledge of Python is required, but no prior knowledge of Spark is necessary; experience with PySpark will be beneficial.
- Create and manipulate SQL queries in Apache Spark
- Build complex Spark functions using Spark's user-defined functions (UDFs)
- Architect big data apps with Spark fundamentals for optimal design
- Apply techniques to manipulate and optimize big data applications
- Develop real-time or near-real-time applications using Spark Streaming
- Work with Apache Spark for machine learning applications