In this chapter, you'll be adding a microphone and a speaker, but more than that, you'll add functionality so that your project can both recognize voice commands and respond through the speaker. This frees you from typing in commands and lets you interact with your projects in an impressive way.
Besides, what self-respecting robot wants to carry around a keyboard? No, you want to interact in natural ways with your projects, and what you learn in this chapter will enable that.
In this chapter, you'll learn about the following topics:
- Hooking up a speaker and a microphone to output and capture sound
- Using eSpeak to let your projects respond in a robot voice
- Using Pocketsphinx to recognize your spoken commands
- Connecting recognized commands to actions so that your robot can respond to what you say
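As a preview of the text-to-speech side, the sketch below wraps the eSpeak command-line tool in a small Python helper. It assumes eSpeak is installed on your system (for example, via `sudo apt-get install espeak` on Raspberry Pi OS); the `say` and `build_espeak_cmd` helper names and the voice and speed defaults are illustrative choices, not part of eSpeak itself.

```python
import shutil
import subprocess

def build_espeak_cmd(text, voice="en", speed=140):
    """Build the espeak command line: -v selects the voice,
    -s sets the speaking rate in words per minute."""
    return ["espeak", "-v", voice, "-s", str(speed), text]

def say(text, **kwargs):
    """Speak the text aloud, if espeak is available on this system."""
    if shutil.which("espeak") is None:
        print("espeak not found; install it with: sudo apt-get install espeak")
        return
    subprocess.run(build_espeak_cmd(text, **kwargs), check=True)

# Inspect the command without playing audio:
print(build_espeak_cmd("Hello, I am your robot"))
```

Keeping command construction separate from execution makes the helper easy to test on a machine without a sound card, and you can experiment with eSpeak's voice variants (such as `en+f3`) and speaking rates by changing the arguments.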