Learn OpenAI Whisper

You're reading from Learn OpenAI Whisper: Transform your understanding of GenAI through robust and accurate speech processing solutions

Product type: Paperback
Published in: May 2024
Publisher: Packt
ISBN-13: 9781835085929
Length: 372 pages
Edition: 1st Edition

Author: Josué R. Batista
Table of Contents (16 chapters)

Preface
Part 1: Introducing OpenAI’s Whisper
Chapter 1: Unveiling Whisper – Introducing OpenAI’s Whisper
Chapter 2: Understanding the Core Mechanisms of Whisper
Part 2: Underlying Architecture
Chapter 3: Diving into the Whisper Architecture
Chapter 4: Fine-Tuning Whisper for Domain and Language Specificity
Part 3: Real-world Applications and Use Cases
Chapter 5: Applying Whisper in Various Contexts
Chapter 6: Expanding Applications with Whisper
Chapter 7: Exploring Advanced Voice Capabilities
Chapter 8: Diarizing Speech with WhisperX and NVIDIA’s NeMo
Chapter 9: Harnessing Whisper for Personalized Voice Synthesis
Chapter 10: Shaping the Future with Whisper
Index
Other Books You May Enjoy

Technical requirements

For this chapter, we will leverage Google Colaboratory’s accessibility and economy. Whisper’s small model requires at least 12 GB of GPU memory, so we need to secure a decent GPU for our Colab session. Unfortunately, getting a good GPU on the free tier of Google Colab (which typically allocates a Tesla T4 with 16 GB) is becoming much harder. With Google Colab Pro, however, we should have no trouble being allocated a V100 or P100 GPU.

To request a GPU, open Google Colab’s main menu, click Runtime | Change runtime type, and then change the Hardware accelerator setting from None to GPU.

We can verify that we’ve been assigned a GPU and view its specifications by running the following code:

# Run nvidia-smi as a shell command (the ! prefix is Colab/IPython syntax)
gpu_info = !nvidia-smi
# Join the captured output lines into a single string
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
  print('Not connected to a GPU')
else:
  print(gpu_info)

Here’s the output:

Figure 3.1 – Example of the output from gpu_info in Google Colab
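
As a complementary check (not part of the book's listing), we can also confirm from Python that the runtime meets the roughly 12 GB memory requirement mentioned earlier. The following is a minimal sketch that assumes PyTorch is available, as it is preinstalled on standard Colab runtimes:

import torch

if torch.cuda.is_available():
    # Query the first (and, on Colab, only) CUDA device
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / (1024 ** 3)
    print(f"GPU: {props.name} with {total_gb:.1f} GB of memory")
    if total_gb < 12:
        # Assumption for illustration: ~12 GB is the threshold cited for Whisper's small model
        print("Warning: this GPU may have too little memory for Whisper's small model")
else:
    print("Not connected to a GPU")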
...