Modern Data Architectures with Python

You're reading from Modern Data Architectures with Python

Product type: Book
Published: Sep 2023
Publisher: Packt
ISBN-13: 9781801070492
Pages: 318
Edition: 1st
Author: Brian Lipp

Table of Contents (19 chapters)

Preface
Part 1: Fundamental Data Knowledge
  Chapter 1: Modern Data Processing Architecture
  Chapter 2: Understanding Data Analytics
Part 2: Data Engineering Toolset
  Chapter 3: Apache Spark Deep Dive
  Chapter 4: Batch and Stream Data Processing Using PySpark
  Chapter 5: Streaming Data with Kafka
Part 3: Modernizing the Data Platform
  Chapter 6: MLOps
  Chapter 7: Data and Information Visualization
  Chapter 8: Integrating Continuous Integration into Your Workflow
  Chapter 9: Orchestrating Your Data Workflows
Part 4: Hands-on Project
  Chapter 10: Data Governance
  Chapter 11: Building out the Groundwork
  Chapter 12: Completing Our Project
Index
Other Books You May Enjoy

Spark schemas

Spark only supports schema on read and write, so you will likely find it necessary to define your schemas manually. Spark offers a wide range of data types, and once you know how to represent a schema, creating data structures becomes straightforward.
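
As a rough illustration, here is how a schema might be defined manually with PySpark's StructType and StructField and then applied on read. The column names and file path are hypothetical placeholders, not taken from the book's project:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DoubleType

spark = SparkSession.builder.appName("schema-sketch").getOrCreate()

# A manually defined schema: each StructField takes a name, a data type,
# and a nullability flag (discussed in the next paragraph).
sales_schema = StructType([
    StructField("order_id", IntegerType(), False),
    StructField("customer", StringType(), True),
    StructField("amount", DoubleType(), True),
])

# Apply the schema when reading instead of relying on schema inference.
df = spark.read.schema(sales_schema).json("/data/raw/sales.json")
df.printSchema()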

One thing to keep in mind is that when you define a schema in Spark, you must also set each column's nullability. When a column is allowed to contain nulls, we set its nullability to True; with this setting, Spark will not throw an error when a null or empty field is present. When we define a StructField, we set three main components: the name, the data type, and the nullability. When we set the nullability to False, Spark will throw an error when null data is added to the DataFrame, as in the sketch below. Limiting nulls when defining the schema can be useful, but keep in mind that throwing an error isn't always the ideal reaction at every stage of a data pipeline.
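
A minimal sketch of that behavior, assuming a local SparkSession and made-up column names: the first DataFrame accepts a null in the nullable column, while the second is rejected because the non-nullable column receives a None.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("nullability-sketch").getOrCreate()

schema = StructType([
    StructField("id", IntegerType(), False),   # nulls not allowed
    StructField("email", StringType(), True),  # nulls allowed
])

# A None in the nullable column is accepted without complaint.
spark.createDataFrame([(1, "a@example.com"), (2, None)], schema).show()

# A None in the non-nullable column makes Spark raise an error
# during schema verification of the incoming rows.
try:
    spark.createDataFrame([(None, "b@example.com")], schema).show()
except Exception as err:
    print(f"Rejected: {err}")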

When working with data pipelines, the discussion about dynamic schema and static schema will often come...
