Summary
In this chapter, we took a deep dive into a large part of the vision APIs. You first learned how to get good descriptions of images. Next, you learned how to recognize celebrities and text in images, and how to generate thumbnails. Following this, we moved on to the Face API, where we retrieved more information about detected faces and saw how to verify whether two faces belong to the same person. After this, you learned how to find similar faces and how to group faces. We then added identification to our smart-house application, allowing it to know who we are, and we added the ability to recognize emotions in faces. We took a quick look at Content Moderator to see how you can add automatic moderation to user-generated content. Finally, we briefly looked at the Custom Vision service and how you can use it to create custom prediction models.
The next chapter will continue with the final vision API. We will focus on videos, learning what the Video Indexer API has to offer.