Maintaining metadata

As we saw in Chapter 4, Designing a Partition Strategy, metastores are like data catalogs: they contain information about all the tables you have, their schemas, the relationships among them, where they are stored, and so on. In that chapter, we learned at a high level how to access the metadata in Synapse and Databricks. Now, let's look at the details of implementing them.

Metadata using Synapse SQL and Spark pools

Synapse supports a shared metadata model. Databases and tables that use the Parquet or CSV storage formats are automatically shared between the compute pools, such as the SQL and Spark pools.

Important Note

At the time of writing this book, data created from Spark can be read and queried by SQL pools, but it cannot be modified by them.

Let's look at an example of creating a database and a table using Spark and accessing it via SQL:

  1. In the Synapse Spark notebook, create a sample table, as shown in the following screenshot...
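The screenshot itself is not reproduced here, but the following is a minimal sketch of what such a notebook cell might contain. The database name (sampledb), table name (salesdata), and columns are illustrative assumptions, not the book's actual example:

    # Sketch of a Synapse Spark notebook cell (assumed names: sampledb, salesdata).
    # Creating the table with the Parquet format makes it eligible for the shared
    # metadata model described above.
    spark.sql("CREATE DATABASE IF NOT EXISTS sampledb")
    spark.sql("""
        CREATE TABLE IF NOT EXISTS sampledb.salesdata (
            orderid  INT,
            product  STRING,
            quantity INT
        ) USING PARQUET
    """)
    spark.sql("INSERT INTO sampledb.salesdata VALUES (1, 'Keyboard', 2)")

Once the Spark pool has created the table, the sampledb database should also become visible to the serverless SQL pool, where the table is typically exposed under the dbo schema and can be queried with a statement such as SELECT * FROM sampledb.dbo.salesdata. As noted above, the SQL pool can read this data but cannot modify it.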