Practical Machine Learning on Databricks

You're reading from Practical Machine Learning on Databricks: Seamlessly transition ML models and MLOps on Databricks

Product type: Paperback
Published: Nov 2023
Publisher: Packt
ISBN-13: 9781801812030
Length: 244 pages
Edition: 1st Edition

Author: Debu Sinha
Table of Contents (16 chapters)

Preface
Part 1: Introduction
Chapter 1: The ML Process and Its Challenges
Chapter 2: Overview of ML on Databricks
Part 2: ML Pipeline Components and Implementation
Chapter 3: Utilizing the Feature Store
Chapter 4: Understanding MLflow Components on Databricks
Chapter 5: Create a Baseline Model Using Databricks AutoML
Part 3: ML Governance and Deployment
Chapter 6: Model Versioning and Webhooks
Chapter 7: Model Deployment Approaches
Chapter 8: Automating ML Workflows Using Databricks Jobs
Chapter 9: Model Drift Detection and Retraining
Chapter 10: Using CI/CD to Automate Model Retraining and Redeployment
Index
Other Books You May Enjoy

Running AutoML on our churn prediction dataset

Let’s take a look at how to use Databricks AutoML with our bank customer churn prediction dataset.

If you executed the notebooks from Chapter 3, Utilizing the Feature Store, you will have the raw data available as a Delta table named raw_data in your Hive metastore. In the Chapter 3 code, we read a CSV file containing the raw data from our Git repository, wrote it out as a Delta table, and registered it in our integrated metastore; take a look at cmd 15 in your notebook. In your environment, the dataset may instead come from another data pipeline or be uploaded directly to the Databricks workspace using the Upload file functionality.
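
If you need to recreate that table outside the Chapter 3 notebook, the following is a minimal PySpark sketch of the same steps: read the CSV, write it out in Delta format, and register it in the metastore. The CSV path and read options here are illustrative assumptions, not the exact values from the Chapter 3 notebook:

# Minimal sketch: recreate the raw_data Delta table from a CSV file.
# The path below is a placeholder; point it at wherever your raw CSV lives.
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("file:/Workspace/Repos/<your-user>/<your-repo>/data/churn.csv")
)

(
    raw_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("raw_data")  # registers the table in the integrated Hive metastore
)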

To view the tables, you need to have your cluster up and running.
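
You can also confirm that the table exists from a notebook cell rather than the Data UI. A minimal sketch follows; it assumes the raw_data table from Chapter 3 was registered in the default database:

# List the tables registered in the current (default) database
spark.sql("SHOW TABLES").show(truncate=False)

# Preview a few rows of the raw churn data
spark.table("raw_data").limit(5).show()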

Figure 5.1 – The location of the raw dataset

Let’s create our first Databricks AutoML experiment.

Important note

Before following the next steps, make sure that you have a cluster up and running...
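
The remaining steps continue in the full chapter. As a complement, here is a minimal sketch of starting an equivalent AutoML classification experiment programmatically with the Databricks AutoML Python API. The target column name ("Exited") and the timeout value are assumptions; adjust them to match your dataset:

from databricks import automl

# Load the raw churn data registered in Chapter 3
df = spark.table("raw_data")

# Start an AutoML classification experiment; target column and timeout are assumptions
summary = automl.classify(
    dataset=df,
    target_col="Exited",
    timeout_minutes=30,
)

# The returned summary links back to the best trial's MLflow run
print(summary.best_trial.mlflow_run_id)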
