
Milestone 7 – Executing the training loops

To begin training, just run the following command:

trainer.train()

Figure 4.1 shows an example of the output you can expect to see from the trainer.train() command’s execution:

Figure 4.1 – Sample output from trainer.train() in Google Colab

During training, evaluation steps run at regular intervals, calculating and displaying the training/validation losses and the WER metric. Depending on your GPU, training could take 5–10 hours. If you run into memory issues, try reducing the batch size and adjusting gradient_accumulation_steps in the declaration of Seq2SeqTrainingArguments.
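As a minimal sketch of that memory-saving adjustment (the values here are hypothetical, not necessarily the chapter's exact configuration), you could halve per_device_train_batch_size and double gradient_accumulation_steps so that the effective batch size stays the same:

from transformers import Seq2SeqTrainingArguments

# Hypothetical values for illustration: halving the per-device batch size
# and doubling gradient_accumulation_steps keeps the effective batch size
# (8 x 2 = 16) while lowering peak GPU memory usage.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetuned",    # assumed checkpoint directory
    per_device_train_batch_size=8,       # reduced from 16
    gradient_accumulation_steps=2,       # increased from 1 to compensate
    gradient_checkpointing=True,         # trades extra compute for lower memory
    fp16=True,                           # mixed precision also reduces memory use
)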

Because of the parameters we set when declaring Seq2SeqTrainingArguments, the model and its metrics are pushed to the Hugging Face Hub as training progresses. The key parameters driving that push to the Hub are shown here:

from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments...
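As a rough sketch of what those Hub-related arguments look like (the repository name and values below are illustrative assumptions, not necessarily the settings used in this chapter):

from transformers import Seq2SeqTrainingArguments

# Illustrative Hub-push settings; replace the repository name with your own.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetuned",                # local checkpoint directory (assumed)
    push_to_hub=True,                                # upload checkpoints and logs to the Hub
    hub_model_id="your-username/whisper-finetuned",  # hypothetical Hub repository
    hub_strategy="every_save",                       # push each time a checkpoint is saved
    report_to=["tensorboard"],                       # log metrics for the Hub's training charts
)

With push_to_hub=True, you can also call trainer.push_to_hub() after training finishes to upload the final model and a model card.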