Learn OpenAI Whisper

You're reading from Learn OpenAI Whisper: Transform your understanding of GenAI through robust and accurate speech processing solutions

Product type: Paperback
Published: May 2024
Publisher: Packt
ISBN-13: 9781835085929
Length: 372 pages
Edition: 1st Edition
Author: Josué R. Batista
Table of Contents (16 chapters)

Preface
Part 1: Introducing OpenAI’s Whisper
Chapter 1: Unveiling Whisper – Introducing OpenAI’s Whisper
Chapter 2: Understanding the Core Mechanisms of Whisper
Part 2: Underlying Architecture
Chapter 3: Diving into the Whisper Architecture
Chapter 4: Fine-Tuning Whisper for Domain and Language Specificity
Part 3: Real-world Applications and Use Cases
Chapter 5: Applying Whisper in Various Contexts
Chapter 6: Expanding Applications with Whisper
Chapter 7: Exploring Advanced Voice Capabilities
Chapter 8: Diarizing Speech with WhisperX and NVIDIA’s NeMo
Chapter 9: Harnessing Whisper for Personalized Voice Synthesis
Chapter 10: Shaping the Future with Whisper
Index
Other Books You May Enjoy

Leveraging the power of quantization

Quantization in machine learning, particularly in ASR, refers to reducing the precision of the model’s parameters. This is typically done by mapping the continuous range of floating-point values to a discrete set of values, often represented by integers. The primary goal of quantization is to decrease the model’s computational complexity and memory footprint, which is crucial for deploying ASR systems on devices with limited resources, such as mobile phones or embedded systems. Quantization is essential for several reasons:

  • Reducing model size: Representing the model’s weights at lower precision can significantly reduce its overall size. This is particularly beneficial for on-device deployment, where storage space is at a premium.
  • Improving inference speed: Lower-precision arithmetic is faster on many hardware platforms, especially those without dedicated floating-point units. This can lead to faster inference and lower latency.
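The mapping from continuous floating-point values to a discrete integer set can be illustrated with a minimal sketch of affine (asymmetric) int8 quantization using NumPy. This is a generic illustration of the technique, not Whisper-specific code; the function names are our own:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine quantization: map float32 weights onto the int8 range [-128, 127]."""
    w_min, w_max = float(weights.min()), float(weights.max())
    # One scale factor and one zero point cover the whole tensor.
    scale = (w_max - w_min) / 255.0
    zero_point = int(np.round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

# Simulated weight tensor: int8 storage is 4x smaller than float32,
# at the cost of a small, bounded rounding error per weight.
weights = np.random.randn(1000).astype(np.float32)
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
```

Production toolchains (for example, PyTorch's quantization utilities) apply the same idea per layer or per channel, often with calibration data to choose the scale, but the size and speed benefits described above come from exactly this float-to-integer mapping.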