Comet for Data Science

You're reading from Comet for Data Science: Enhance your ability to manage and optimize the life cycle of your data science project

Product type: Paperback
Published: Aug 2022
Publisher: Packt
ISBN-13: 9781801814430
Length: 402 pages
Edition: 1st Edition
Author: Angelica Lo Duca
Table of Contents (16 chapters)

Preface
Section 1 – Getting Started with Comet
Chapter 1: An Overview of Comet
Chapter 2: Exploratory Data Analysis in Comet
Chapter 3: Model Evaluation in Comet
Section 2 – A Deep Dive into Comet
Chapter 4: Workspaces, Projects, Experiments, and Models
Chapter 5: Building a Narrative in Comet
Chapter 6: Integrating Comet into DevOps
Chapter 7: Extending the GitLab DevOps Platform with Comet
Section 3 – Examples and Use Cases
Chapter 8: Comet for Machine Learning
Chapter 9: Comet for Natural Language Processing
Chapter 10: Comet for Deep Learning
Chapter 11: Comet for Time Series Analysis
Other Books You May Enjoy

Exploring model evaluation techniques

Depending on the problem we want to solve, there are different model evaluation techniques. In this section, we will consider three types of problems: regression, classification, and clustering.

The first two problems fall within the scope of supervised learning, while the third falls within the scope of unsupervised learning.

In this section, you will review the main metrics used for model evaluation in each of these problem types. We will implement a practical example in Python to illustrate how to calculate each metric. To review the main evaluation metrics, we will use only two datasets: the training set and the test set.
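As a concrete illustration of the hold-out approach just described, a classification model can be trained on the training set and scored on the test set. The following sketch uses scikit-learn and a synthetic dataset; both are assumptions for illustration and are not the chapter's own example:

```python
# A minimal hold-out evaluation sketch: train on the training set,
# compute classification metrics on the held-out test set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification data (for illustration only).
X, y = make_classification(n_samples=500, random_state=42)

# Split into the two datasets used for evaluation: training and test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit the model on the training set only.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on the test set, which the model has never seen.
y_pred = model.predict(X_test)
acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```

The same pattern applies to regression (swap in `mean_squared_error` or `r2_score`) and clustering (internal metrics such as `silhouette_score`, since clustering has no labels to compare against).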

Regarding supervised learning, there is also an additional technique for model evaluation, called cross-validation. The basic idea behind cross-validation is to split the original dataset into several subsets. The model is trained on all the subsets except one, which is held out for evaluation. When the training phase is completed...
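The procedure just outlined (hold out one subset, train on the rest, then rotate so that every subset is held out once) can be sketched with scikit-learn's k-fold utilities. The library choice and the synthetic regression data are assumptions for illustration:

```python
# A k-fold cross-validation sketch: the dataset is split into 5 folds,
# and each fold is held out once while the model trains on the other four.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic regression data (for illustration only).
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# 5 subsets: train on 4, evaluate on the 5th, and rotate.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")

# One score per held-out fold; their mean is the cross-validated estimate.
print(f"mean R^2 over {len(scores)} folds: {scores.mean():.3f}")
```

Averaging over the folds gives a more stable performance estimate than a single train/test split, at the cost of training the model k times.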
