Hands-On Big Data Analytics with PySpark

Analyze large datasets and discover techniques for testing, immunizing, and parallelizing Spark jobs

Product type: Paperback
Published: Mar 2019
Publisher: Packt
ISBN-13: 9781838644130
Length: 182 pages
Edition: 1st
Authors (3): James Cross, Bartłomiej Potaczek, Rudy Lai
Table of Contents (15 chapters)

Preface
1. Installing Pyspark and Setting up Your Development Environment
2. Getting Your Big Data into the Spark Environment Using RDDs
3. Big Data Cleaning and Wrangling with Spark Notebooks
4. Aggregating and Summarizing Data into Useful Reports
5. Powerful Exploratory Data Analysis with MLlib
6. Putting Structure on Your Big Data with SparkSQL
7. Transformations and Actions
8. Immutable Design
9. Avoiding Shuffle and Reducing Operational Expenses
10. Saving Data in the Correct Format
11. Working with the Spark Key/Value API
12. Testing Apache Spark Jobs
13. Leveraging the Spark GraphX API
14. Other Books You May Enjoy

Manipulating DataFrames with Spark SQL schemas

In this section, we will learn more about DataFrames and how to use Spark SQL.

The Spark SQL interface is very simple. Taking away the labels from our data puts us in unsupervised learning territory, and Spark has great support for clustering and dimensionality reduction algorithms, so we can tackle these learning problems effectively by using Spark SQL to give our big data a structure.

Let's take a look at the code that we will be using in our Jupyter Notebook. To maintain consistency, we will be using the same KDD cup data:

  1. First, we load the text file into a raw_data variable, as follows:
raw_data = sc.textFile("./kddcup.data.gz")
  2. What's new here is that we import two new classes from pyspark.sql:
    • Row
    • SQLContext
  3. The following code shows how to import them (a sketch of how the two fit together follows these steps):
from pyspark.sql import Row, SQLContext
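
To preview how these two classes fit together, here is a minimal sketch of the usual pattern: split each line of the KDD cup data into fields, wrap the fields in Row objects (whose keyword names become column names), and hand the result to a SQLContext to obtain a DataFrame that can be queried with SQL. The column names (duration, protocol_type, service) and the view name connections are illustrative assumptions, not taken from this chapter:

from pyspark import SparkContext
from pyspark.sql import Row, SQLContext

sc = SparkContext.getOrCreate()
sql_context = SQLContext(sc)

raw_data = sc.textFile("./kddcup.data.gz")

# Split each comma-separated record into its fields
csv = raw_data.map(lambda line: line.split(","))

# Wrap the first three fields of each record in a Row;
# the keyword names become DataFrame column names (illustrative, not the book's)
rows = csv.map(lambda f: Row(duration=int(f[0]),
                             protocol_type=f[1],
                             service=f[2]))

# Infer the schema from the Row objects and build a DataFrame
df = sql_context.createDataFrame(rows)

# Register the DataFrame as a temporary view so it can be queried with SQL
df.createOrReplaceTempView("connections")
sql_context.sql(
    "SELECT protocol_type, COUNT(*) AS cnt "
    "FROM connections GROUP BY protocol_type"
).show()

Once the DataFrame is registered as a view, any Spark SQL query can run against it; this is what it means to use Spark SQL to give big data a structure.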