What this book covers
Chapter 1, Basics of SQL to Transform Data, explores the basics of SQL and demystifies this standard, powerful, yet easy-to-read language, which is ubiquitous when working with data.
You will understand the different types of commands in SQL, how to get started with a database, and the SQL commands to work with data. We will look a bit deeper into the SELECT statement and the JOIN logic, as they will be crucial when working with dbt. You will be guided to create a free Snowflake account to experiment with the SQL commands and later use it together with dbt.
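As a taste of what this chapter covers, the following is a minimal SELECT with a JOIN; the customers and orders tables and their columns are illustrative, not taken from the book's examples:

```sql
-- Hypothetical tables: list each customer with the total amount of their orders.
SELECT
    c.customer_name,
    SUM(o.order_amount) AS total_amount
FROM customers AS c
JOIN orders AS o
    ON o.customer_id = c.customer_id
GROUP BY c.customer_name;
```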
Chapter 2, Setting Up Your dbt Cloud Development Environment, gets you started with dbt by creating your GitHub and dbt accounts. You will learn why version control is important and what the data engineering workflow looks like when working with dbt.
You will also understand the difference between the open-source dbt Core and the commercial dbt Cloud. Finally, you will experiment with the default project, set up your environment to run basic SQL with dbt on Snowflake, and understand the key dbt functions: ref and source.
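To give a first idea of these two functions, here is a minimal sketch of a dbt model that reads from a declared source and from another model; the source, table, and model names are made up for illustration:

```sql
-- Hypothetical dbt model: join raw orders (a source) with a staging model.
SELECT
    o.order_id,
    o.order_date,
    c.customer_name
FROM {{ source('raw_data', 'orders') }} AS o
JOIN {{ ref('stg_customers') }} AS c
    ON o.customer_id = c.customer_id
```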
Chapter 3, Data Modeling for Data Engineering, shows why and how to describe data, and how to move across different abstraction levels, from the business processes down to the storage of the data that supports them: conceptual, logical, and physical data models.
You will understand entities, relationships, attributes, entity-relationship (E-R) diagrams, modeling use cases and modeling patterns, Data Vault, dimensional models, wide tables, and business reporting.
Chapter 4, Analytics Engineering as the New Core of Data Engineering, showcases the full data life cycle and the different roles and responsibilities of the people who work on data.
You will understand the modern data stack, the role of dbt, and analytics engineering. You will learn how to adopt software engineering practices to build data platforms (an approach known as DataOps), and the importance of working as a team, not in silos.
Chapter 5, Transforming Data with dbt, shows you how to develop an example application in dbt, walking through all the steps to create, deploy, run, test, and document a data application with dbt.
Chapter 6, Writing Maintainable Code, continues the example started in the previous chapter, guiding you to configure dbt and write some basic but functionally complete code to build the three layers of our reference architecture: staging/storage, refined data, and delivery with data marts.
Chapter 7, Working with Dimensional Data, shows you how to incorporate dimensional data in your data models and use it to give context to your facts and serve a multitude of purposes. We will explore how to create the data models, add the dimensional data to our reference architecture, and incorporate it in the data marts. We will also recap everything covered in the previous chapters with an example.
Chapter 8, Delivering Consistency in Your Code, shows you how to add consistency to your transformations. You will learn how to go beyond basic SQL and bring the power of scripting into your code, write your first macros, and use external libraries in your projects.
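As a taste of what a macro looks like, here is a minimal sketch adapted from a common dbt example; the macro name, the column, and the numeric precision are illustrative:

```sql
-- Illustrative dbt macro: convert an amount stored in cents into dollars,
-- keeping the conversion logic in a single, reusable place.
{% macro cents_to_dollars(column_name, precision=2) %}
    ({{ column_name }} / 100)::numeric(16, {{ precision }})
{% endmacro %}
```

Any model can then call {{ cents_to_dollars('amount_cents') }} instead of repeating the expression.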
Chapter 9, Delivering Reliability in Your Data, shows you how to ensure the reliability of your code by adding tests that verify your expectations and check the results of your transformations.
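For a flavor of testing in dbt, this is a minimal sketch of a singular test, a SELECT saved as a SQL file in the tests folder; the model and column names are made up. dbt reports a failure if the query returns any rows:

```sql
-- Hypothetical singular test: no order should have a negative amount.
-- The test fails if this query returns one or more rows.
SELECT *
FROM {{ ref('fct_orders') }}
WHERE order_amount < 0
```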
Chapter 10, Agile Development, teaches you how to develop with agility, mixing philosophy and practical hints, discussing how to keep the backlog agile through the phases of your projects, and taking a deep dive into building data marts.
Chapter 11, Collaboration, touches on a few practices that help developers work as a team, and on the support that dbt provides for them.
Chapter 12, Deployment, Execution, and Documentation Automation, helps you learn how to automate the operation of your data platform by setting up environments and jobs that release and execute your code according to your deployment design.
Chapter 13, Moving beyond Basics, helps you learn how to manage the identity of your entities so that you can apply master data management to combine data from different systems. At the same time, you will review the best practices to apply modularity in your pipelines to simplify their evolution and maintenance. You will also discover how macros can be used to implement patterns.
Chapter 14, Enhancing Software Quality, helps you discover and apply more advanced patterns that provide high-quality results in real-life projects, and you will experiment with how to evolve your code with confidence through refactoring.
Chapter 15, Patterns for Frequent Use Cases, presents you with a small library of patterns that are frequently used for ingesting data from external files and storing the ingested data in what we call history tables. You will also get the insights and the code needed to ingest data into Snowflake.
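As a preview, file ingestion in Snowflake typically relies on the COPY INTO command; the stage, table, and file format below are illustrative, not the book's actual code:

```sql
-- Hypothetical example: load CSV files from a named stage into a landing table.
COPY INTO landing.orders_landing
FROM @my_stage/orders/
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```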