Hands-On Big Data Analytics with PySpark

Product type: Book
Published: Mar 2019
Publisher: Packt
ISBN-13: 9781838644130
Pages: 182
Edition: 1st
Authors (2): Rudy Lai, Bartłomiej Potaczek

Table of Contents (15 chapters)

Preface
1. Installing Pyspark and Setting up Your Development Environment
2. Getting Your Big Data into the Spark Environment Using RDDs
3. Big Data Cleaning and Wrangling with Spark Notebooks
4. Aggregating and Summarizing Data into Useful Reports
5. Powerful Exploratory Data Analysis with MLlib
6. Putting Structure on Your Big Data with SparkSQL
7. Transformations and Actions
8. Immutable Design
9. Avoiding Shuffle and Reducing Operational Expenses
10. Saving Data in the Correct Format
11. Working with the Spark Key/Value API
12. Testing Apache Spark Jobs
13. Leveraging the Spark GraphX API
14. Other Books You May Enjoy

Separating logic from the Spark engine: unit testing

Let's start by separating logic from the Spark engine.

In this section, we will cover the following topics:

  • Creating a component with logic
  • Unit testing of that component
  • Using a case class from the model package for our domain logic

Let's look at the logic first and then the simple test.

So, we have a BonusVerifier object that has only one method, qualifyForBonus, which takes our UserTransaction model class. According to our logic in the following code, we load user transactions and filter all users who qualify for a bonus. If we wanted to test this through Spark, we would first need to create an RDD and filter it; we would need to create a SparkSession, create data for mocking an RDD or DataFrame, and then exercise the whole Spark API. Since this is pure logic, we will instead test it in isolation. The logic is as follows:

package com.tomekl007.chapter_6...
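
The rest of the listing is cut off in this excerpt. As a minimal sketch of what such a component might look like (the UserTransaction fields and the qualification rule below are illustrative assumptions, not the book's actual code), the important property is that BonusVerifier is plain Scala with no Spark dependency at all:

package com.tomekl007.chapter_6

// Model class for a user's transaction. The field names and types
// here are assumptions for illustration only.
case class UserTransaction(userId: String, amount: Int)

// A plain Scala object with no Spark imports, so its logic can be
// unit tested without a SparkSession.
object BonusVerifier {
  // Hypothetical qualification rule, purely for illustration.
  private val superUsers = List("A", "X", "100-million")

  def qualifyForBonus(userTransaction: UserTransaction): Boolean =
    superUsers.contains(userTransaction.userId) &&
      userTransaction.amount > 100
}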
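Because the logic has no Spark dependency, the unit test needs no SparkSession, no RDDs, and no mocked DataFrames. A minimal sketch of such a test, assuming the UserTransaction and BonusVerifier definitions sketched above and ScalaTest's FunSuite style:

package com.tomekl007.chapter_6

import org.scalatest.FunSuite

class BonusVerifierTest extends FunSuite {

  // No SparkSession here: the domain logic is exercised directly.
  test("should qualify a super user with a large enough transaction") {
    assert(BonusVerifier.qualifyForBonus(UserTransaction("A", 101)))
  }

  test("should reject a user who is not on the super-user list") {
    assert(!BonusVerifier.qualifyForBonus(UserTransaction("someone-else", 500)))
  }
}

Since qualifyForBonus is a pure function, these tests run in milliseconds on a plain JVM; the Spark-specific plumbing (loading transactions into an RDD and filtering them) can be covered separately by a much smaller number of integration tests.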