Throughout this chapter, we covered the Emotion API and the Video API. We started by making the smart-house application see what kind of mood you are in. We then explored the Video API in depth, where you learned how to detect and track faces, detect motion, stabilize videos, and generate intelligent video thumbnails. Next, we returned to the Emotion API, where you learned how to perform emotion analysis on videos. Finally, we closed the chapter with a quick look at the Video Indexer and how it can be used to gain insights from videos.
In the next chapter, we will move away from the Vision APIs and on to the first of the Language APIs. You will learn how to understand the intent behind sentences, using the power of LUIS (Language Understanding Intelligent Service).