
Business Intelligence with Databricks SQL: Concepts, tools, and techniques for scaling business intelligence on the data lakehouse

eBook: €19.99 (was €28.99)
Paperback: €35.99
Subscription: Free trial; renews at €18.99 p/m

What do you get with a Packt Subscription?

Free for first 7 days. $19.99 p/m after that. Cancel any time!
  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.

Business Intelligence with Databricks SQL

Introduction to Databricks

Databricks is one of the most recognizable names in the big data industry. It provides the lakehouse platform for data analytics and artificial intelligence (AI). This book is about Databricks SQL, a product within the Databricks Lakehouse platform that powers data analytics and business intelligence.

Databricks SQL is a rapidly evolving product. It is not a traditional data warehouse, yet its users are traditional data warehouse and business intelligence users. It claims to provide all the functionality of a data warehouse on what is essentially a data lake. This concept can be a bit jarring. It can create resistance to adoption, as you might wonder whether your skills are transferable, or whether your work might be disrupted by a new learning curve.

Hence, I am writing this book.

The primary intent of this book is to help you learn the fundamental concepts of Databricks SQL in a fun, follow-along interactive manner. My aim is that by the time you complete this book, you will be confident in your adoption of Databricks SQL as the enabler of your business intelligence.

This book does not intend to be a definitive guide or a complete reference, nor does it intend to be a replacement for the official documentation. It is too early for either of those. This book is your initiation into business intelligence on the data lakehouse, the Databricks SQL way.

Let’s begin!

In this chapter, we’ll cover the following topics:

  • An overview of Databricks, the company
  • An overview of the Lakehouse architecture
  • An overview of the Databricks Lakehouse platform

Technical requirements

There are no technical requirements for this chapter. However, familiarity with the concepts of databases, data warehouses, and data lakes will help.

An overview of Databricks, the company

Databricks was founded in 2013 by seven researchers at the University of California, Berkeley.

This was the time when the world was learning how the Meta, Amazon, Netflix, Google, and Apple (MANGA) companies had built their success by scaling up their use of AI techniques in all aspects of their operations. Of course, they could do this because they invested heavily in talent and infrastructure to build their data and AI systems. Databricks was founded with the mission to enable everyone else to do the same – use data and AI in service of their business, irrespective of their size, scale, or technological prowess.

The mission was to democratize AI. What started as a simple platform, leveraging the open source technologies that the co-founders of Databricks had created, has now evolved into the lakehouse platform, which unifies data, analytics, and AI in one place.

As an interesting side note, and in my opinion: to this day, I meet people and organizations that equate Databricks with Apache Spark. This is not correct. The platform indeed debuted with a cloud service for running Apache Spark. However, it is important to understand that Apache Spark was the enabling technology for the big data processing platform; it was not the product. The product is a simple platform that enables the democratization of data and AI.

Databricks is a strong proponent of the open source community. A lot of popular open source projects trace their roots to Databricks, including MLflow, Koalas, and Delta Lake. The profile of these innovations demonstrates Databricks's commitment to its mission of democratizing data and AI. MLflow is an open source technology that enables machine learning (ML) operations, or MLOps. Delta Lake is the key innovation that brings reliability, governance, and simplification to data engineering and business intelligence operations on the data lake. It is the key to building the lakehouse on top of cloud storage systems such as Amazon Web Services' Simple Storage Service (S3), Microsoft Azure's Azure Data Lake Storage (ADLS), and Google Cloud Storage (GCS), as well as on-premises HDFS systems.
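To make this concrete, here is a minimal sketch of what Delta Lake looks like in practice. The table and column names are illustrative, not from the book; the point is simply that a Delta table is Parquet files plus a transaction log on ordinary cloud storage:

```sql
-- Create a Delta table; the data lands as Parquet files plus a
-- transaction log on cloud storage (S3, ADLS, or GCS).
CREATE TABLE customer_orders (
  order_id   BIGINT,
  customer   STRING,
  amount     DOUBLE,
  order_date DATE
) USING DELTA;

-- Writes are ACID: this statement either fully commits or has no effect.
INSERT INTO customer_orders VALUES (1, 'acme', 42.50, '2022-09-16');
```

We will return to Delta Lake in depth later in the book; for now, the key idea is that ordinary SQL gains transactional guarantees on top of a data lake.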

Within the Databricks platform, these open source technologies are firmed up for enterprise readiness. They are blended with platform innovations for various data personas such as data engineers, data scientists, and data analysts. This means that MLflow within the Databricks Lakehouse platform powers enterprise-grade MLOps. Delta Lake within the Databricks Lakehouse platform powers enterprise-grade data engineering and data governance. With the Databricks SQL product, the Databricks Lakehouse platform can power all the business intelligence needs for the enterprise as well!

Technologies and Trademarks

Throughout this book, we will refer to trademarked technologies and products. Some notable examples are Apache Spark™, Hive™, Delta Lake™, Power BI™, and Tableau™, among others.

All such trademarks are implied whenever we mention them in the book. For the sake of brevity and readability, I will omit the ™ symbol in the rest of the book.

An overview of the Lakehouse architecture

If, at this point, you are a bit confused by so many terms – Databricks, lakehouse, Databricks SQL, and more – worry not. We are just at the beginning of our learning journey. We will unpack all of these throughout this book.

First, what is Databricks?

Databricks is a platform that enables enterprises to quickly build their data lakehouse infrastructure and enables all data personas – data engineers, data scientists, and business intelligence personnel – in their organization to extract and deliver insights from the data. The platform provides a curated experience for each data persona, enabling them to execute their daily workflows. The foundational technologies that enable these experiences are open source – Apache Spark, Delta Lake, MLflow, and more.

So, what is the Lakehouse architecture and why do we need it?

The Lakehouse architecture was formally presented at the Conference on Innovative Data Systems Research (CIDR) in January 2021. You can download the paper from https://databricks.com/research/lakehouse-a-new-generation-of-open-platforms-that-unify-data-warehousing-and-advanced-analytics. It is an easily digestible paper that I encourage you to read for the full details. That said, I will now summarize its salient points.

Attribution, Where It Is Due

In my summary of the said research paper, I am recreating the images that were originally provided. They remain the intellectual property of the authors of the research paper.

According to the paper, most of the present-day data analytics infrastructures look like a two-tier system, as shown in the following diagram:

Figure 1.1 – Two-tier data analytics infrastructures

In this two-tier system, first, data from source systems is brought onto a data lake. Examples of source systems could be your web or mobile application, transactional databases, ERP systems, social media data, and more. The data lake is typically an on-premises HDFS system or cloud object storage. Data lakes allow you to store data in big data-optimized file formats such as Apache Parquet, ORC, and Avro. The use of these open file formats enables flexibility in writing to the data lake (due to schema-on-read semantics). This flexibility enables faster ingestion of data, which, in turn, enables faster access to data for end users. It also enables more advanced analytics use cases in ML and AI.
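As a hedged illustration of what schema-on-read flexibility means in practice, engines such as Databricks SQL can query these open file formats in place, with no table definition created beforehand. The paths below are placeholders:

```sql
-- Query Parquet files directly on the lake (schema-on-read):
-- the schema is inferred from the files at query time.
SELECT * FROM parquet.`s3://my-bucket/events/2022/`;

-- The same pattern works for other open formats, such as JSON:
SELECT * FROM json.`s3://my-bucket/raw/clickstream/`;
```

This is what makes ingestion fast: data can land on the lake and be queried immediately, without first being loaded into a warehouse schema.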

Of course, this architecture still needs to support the traditional BI workloads and decision support systems. Hence, a second process, typically in the form of Extract, Transform, and Load (ETL), is built to copy data from the data lake to a dedicated data warehouse.

Close inspection of the two-tier architecture reveals several systemic problems:

  • Duplication of data: This architecture requires the same data to be present in two different systems, which increases storage costs. The two copies must also be constantly reconciled, which increases ETL operations and their associated costs.
  • Security and governance: Data lakes and data warehouses have very different approaches to data security. This results in different security mechanisms for the same data, which must always be kept in sync to avoid data security violations.
  • Latency in data availability: In the two-tier architecture, data is only moved to the warehouse by a secondary process, which introduces latency. This means analysts do not get access to fresh data. It also makes the architecture unsuitable for tactical decision support, such as operational reporting.
  • Total cost of ownership: Enterprises end up paying double for the same data. There are two storage systems, two ETL processes, two engineering debts, and more.

As you can see, this is unintuitive and unsustainable.

Hence, the paper presents the Lakehouse architecture as the way forward.

Simply put, the data lakehouse architecture is a data management system that implements all the features of data warehouses on data lakes. This makes the data lakehouse a single unified platform for business intelligence and advanced analytics.

This means that the lakehouse platform will implement data management features such as security controls, ACID transaction guarantees, data versioning, and auditing. It will implement query performance features such as indexing, caching, and query optimization. These features are table stakes for data warehouses. The Lakehouse architecture brings them to the flexible, open-format data storage of data lakes. A Lakehouse is thus a single platform that provides both data warehousing capabilities and advanced analytics capabilities, with cloud data lake economics.
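As a brief, hedged illustration of these warehouse-grade features on a data lake, Delta Lake tables expose auditing and data versioning through plain SQL (the table name below is illustrative):

```sql
-- Auditing: every transaction on a Delta table is recorded in its log.
DESCRIBE HISTORY customer_orders;

-- Data versioning ("time travel"): query the table as it existed
-- at an earlier version or timestamp.
SELECT * FROM customer_orders VERSION AS OF 3;
SELECT * FROM customer_orders TIMESTAMP AS OF '2022-09-01';
```

Features like these are exactly what the paper means by bringing data warehouse management capabilities to open-format storage.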

What is the Formal Definition of the Lakehouse?

Section 3 in the CIDR paper officially defines the Lakehouse. Check it out.

The following is a visual depiction of the Lakehouse:

Figure 1.2 – Lakehouse architecture

The idea of the Lakehouse is deceptively simple – as all good things in life are! The Lakehouse architecture immediately solves the problems we highlighted about present-day two-tier architectures:

  • A single storage layer means no duplication of data and no extra effort to reconcile data. Reduced ETL requirements and ACID guarantees equate to the stability and reliability of the system.
  • A single storage layer means a single model of security and governance for all data assets. This reduces the risk of security breaches.
  • A single storage layer means the availability of the freshest data possible for the consumers of the data.
  • Cheap cloud storage with elastic, on-demand cloud compute reduces the total cost of ownership.
  • Open source technologies in the storage layer reduce the chances of vendor lock-in and make it easy to integrate with other tools.

Of course, any implementation of the Lakehouse will have to ensure the following:

  • Reliable data management: The Lakehouse proposes to eliminate (or reduce) data warehouses. Hence, the Lakehouse implementation must efficiently implement data management and governance – features that are table stakes in data warehouses.
  • SQL performance: The Lakehouse will have to provide state-of-the-art SQL performance on top of the open-access filesystems and file formats typical in data lakes.

This is where the Databricks Lakehouse platform, and within it, the Databricks SQL product, comes in.

An overview of the Databricks Lakehouse platform

The Databricks Lakehouse platform enables enterprises to build their Lakehouse by providing simplified data engineering and data management techniques. The Databricks Lakehouse platform also provides one of the best ML experiences for data scientists and ML engineers.

Finally, Databricks SQL brings in the last piece of the puzzle – a home for the business intelligence and data analyst personas with a first-class workbench that allows query editing, building visualizations, and publishing dashboards. It also allows plug-and-play with downstream business intelligence tools such as Power BI, Tableau, Looker, and more. All of this is backed by state-of-the-art SQL query performance.
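For a flavor of that analyst workflow, a typical query authored in the Databricks SQL editor and pinned to a dashboard might look like the following. The schema and names are illustrative, not from the book:

```sql
-- A typical BI aggregation: monthly revenue by customer.
SELECT
  date_trunc('MONTH', order_date) AS order_month,
  customer,
  SUM(amount)                     AS revenue
FROM customer_orders
GROUP BY 1, 2
ORDER BY order_month;
```

The same query can back a visualization in the workbench or be issued from an external BI tool over the same SQL warehouse.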

The following diagram represents the Databricks Lakehouse platform:

Figure 1.3 – The Databricks Lakehouse platform

The lakehouse platform by Databricks is a simple, open, and collaborative platform that combines the reliability, performance, and governance capabilities of data warehouses with the openness, flexibility, and economies of cloud data lakes.

Summary

In this chapter, we learned about Databricks as a company and the Databricks Lakehouse platform as the product of this company, which enables the democratization of data and AI for all organizations. We are now ready to begin exploring Databricks SQL.

In the next chapter, Chapter 2, The Databricks Product Suite – A Visual Tour, we will start with a tour of the Databricks Lakehouse platform.


Key benefits

  • Learn about business intelligence on the lakehouse with features and functions of Databricks SQL
  • Make the most of Databricks SQL by getting to grips with the enablers of its data warehousing capabilities
  • Learn concepts and techniques through a unique approach: follow-along scenarios on real datasets

Description

In this new era of data platform system design, data lakes and data warehouses are giving way to the lakehouse – a new type of data platform system that aims to unify all data analytics into a single platform. Databricks, with its Databricks SQL product suite, is the hottest lakehouse platform out there, harnessing the power of Apache Spark™, Delta Lake, and other innovations to enable data warehousing capabilities on the lakehouse with data lake economics. This book is a comprehensive hands-on guide that helps you explore all the advanced features, use cases, and technology components of Databricks SQL. You’ll start with the lakehouse architecture fundamentals and understand how Databricks SQL fits into it. The book then shows you how to use the platform, from exploring data, executing queries, building reports, and using dashboards through to learning the administrative aspects of the lakehouse – data security, governance, and management of the computational power of the lakehouse. You’ll also delve into the core technology enablers of Databricks SQL – Delta Lake and Photon. Finally, you’ll get hands-on with advanced SQL commands for ingesting data and maintaining the lakehouse. By the end of this book, you’ll have mastered Databricks SQL and be able to deploy and deliver fast, scalable business intelligence on the lakehouse.

Who is this book for?

This book is for business intelligence practitioners, data warehouse administrators, and data engineers who are new to Databricks SQL and want to learn how to deliver high-quality insights unhindered by the scale of data or infrastructure. This book is also for anyone looking to study the advanced technologies that power Databricks SQL. Basic knowledge of data warehouses, SQL-based analytics, and ETL processes is recommended to effectively learn the concepts introduced in this book and appreciate the innovation behind the platform.

What you will learn

  • Understand how Databricks SQL fits into the Databricks Lakehouse Platform
  • Perform everyday analytics with Databricks SQL Workbench and business intelligence tools
  • Organize and catalog your data assets
  • Program the data security model to protect and govern your data
  • Tune SQL warehouses (computing clusters) for optimal query experience
  • Tune the Delta Lake storage format for maximum query performance
  • Deliver extreme performance with the Photon query execution engine
  • Implement advanced data ingestion patterns with Databricks SQL

Product Details

Publication date: Sep 16, 2022
Length: 348 pages
Edition: 1st
Language: English
ISBN-13: 9781803235332
Vendor: Databricks




Table of Contents

Part 1: Databricks SQL on the Lakehouse
Chapter 1: Introduction to Databricks
Chapter 2: The Databricks Product Suite – A Visual Tour
Chapter 3: The Data Catalog
Chapter 4: The Security Model
Chapter 5: The Workbench
Chapter 6: The SQL Warehouses
Chapter 7: Using Business Intelligence Tools with Databricks SQL
Part 2: Internals of Databricks SQL
Chapter 8: The Delta Lake
Chapter 9: The Photon Engine
Chapter 10: Warehouse on the Lakehouse
Part 3: Databricks SQL Commands
Chapter 11: SQL Commands – Part 1
Chapter 12: SQL Commands – Part 2
Part 4: TPC-DS, Experiments, and Frequently Asked Questions
Chapter 13: Playing with the TPC-DS Dataset
Chapter 14: Ask Me Anything
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.6 out of 5 (13 ratings)
5 star: 61.5%
4 star: 38.5%
3 star: 0%
2 star: 0%
1 star: 0%
AL – Oct 25, 2022 – 5 stars
Reading this book is like learning from a private tutor. The book covers the Databricks Lakehouse architecture, data security, and governance. The author uses simple, easy-to-understand diagrams and examples to illustrate how various components such as Delta Lake, Unity Catalog, SQL Warehouse, and Photon fit within the Lakehouse architecture. Whether you are a new or an experienced Databricks user, you will find something useful and refreshing. The lab exercises on Databricks SQL are fun to follow, as they use real-world datasets, and the benchmarking using the TPC-DS dataset is a bonus!
Amazon Verified review
amazonVerifiedBuyer – Dec 10, 2022 – 5 stars
It is well written, with good explanations and concrete examples, and it covers a large variety of topics. The author knows what he is talking about. The writing and examples are all very clear, and there are plenty of figures and code snippets to help you follow along, as well as some diagrams to explain certain concepts. What is particularly good about this book is that it is written from experience in the field. It's not an academic book describing how things should work; it actually comes from experience with real projects. I had the opportunity to review the book Business Intelligence with Databricks SQL by Vihag Gupta, a master practitioner of the Databricks platform. This book is great for both newcomers and experienced users of the Databricks platform. It starts with a nice overview that can get anyone up to speed in no time. However, as you progress, Vihag starts to dig deeper into traditional SQL and Databricks SQL (including Photon). If you have been curious about Photon, this book will allow you to better understand the internals and know how to write efficient, optimal queries.
Amazon Verified review
Kieran O'Driscoll – Nov 13, 2022 – 5 stars
This book walks through each component of the Databricks Lakehouse platform and covers the basics along with more technical aspects. The section on the Photon engine was extremely helpful in understanding how Databricks SQL is so powerful. A great read for anyone getting started on the Lakehouse journey!
Amazon Verified review
vfortier – Oct 27, 2022 – 5 stars
A great book about Databricks SQL: how to set it up, how to use it, and how to make the best of it. Highly recommended.
Amazon Verified review
Hales – Sep 20, 2022 – 5 stars
This book provides both high-level context and detailed examples and explanations, which are useful for novice to experienced BI practitioners and data engineers supporting BI workloads. As the book does not cover set-up, it will be most beneficial for those with access to an existing environment and some experience with SQL. The book is written in simple, easy-to-understand explanations and loaded with screenshots, code snippets, and diagrams to support the concepts covered.

The book's first section is helpful if you are new to cloud architecture, and it jumps into the product with a tour. The data and security chapters cover storing and accessing data and the latest Unity Catalog components. While the book doesn't go deep on user provisioning, it has enough detail to understand privileges to secure your tables, views, and dashboards. The SQL warehouse chapter dives into the architecture in the cloud and includes case studies around how to scale your data and manage performance. The BI section has code and screenshots using built-in datasets that you can follow along with to build a dashboard or connect to an external tool. The second section goes deep into the nuts and bolts of Databricks SQL, optimizations and performance features, and the underlying engines. It ends with an example implementation of a cloud data warehouse using the lakehouse architecture and is geared towards people familiar with data modeling and warehouse design.

The third section has SQL commands with many code examples, including CDC commands, time travel, and dealing with nested data. The last section has exercises around a TPC-DS dataset, using IDEs, more case studies, and FAQs. Note that you will need the jar file from GitHub. The last two sections are dense with information but are still easy to follow. One thing worth mentioning is that the features outlined in this book are not available in the free Databricks Community edition, so you will need a standard or enterprise account to follow along with the examples and create the dashboards, alerts, etc.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use towards owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page – found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the ‘cancel subscription’ button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle – a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage – subscription.packtpub.com – by clicking on the ‘My Library’ dropdown and selecting ‘credits’.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content, as it's being created. With Early Access, you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready. We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready, but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.