Summary
In this chapter, we showed how WebSockets can help us bring a more interactive experience to users. Thanks to the pretrained models provided by the Hugging Face community, we were able to quickly implement an object detection system. We then integrated it into a WebSocket endpoint with the help of FastAPI. Finally, using a modern JavaScript API, we sent video input and displayed the algorithm's results directly in the browser. All in all, a project like this might sound complex to build at first, but we saw that powerful tools such as FastAPI enable us to get results in a very short time, with very readable source code.
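To recap the shape of that integration, here is a minimal sketch of a FastAPI WebSocket endpoint wrapping a Hugging Face object detection pipeline. The endpoint path, the choice of the facebook/detr-resnet-50 model, and the assumption that the browser sends each frame as raw JPEG/PNG bytes are illustrative choices here, not necessarily the exact ones used in the chapter's project.

```python
import asyncio
import io

from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from PIL import Image
from transformers import pipeline

# A pretrained object detection model from the Hugging Face Hub
# (example model; the chapter's exact model may differ).
object_detector = pipeline("object-detection", model="facebook/detr-resnet-50")

app = FastAPI()


def detect_objects(frame: bytes) -> list[dict]:
    """Run the detector on a single encoded frame and return its predictions."""
    image = Image.open(io.BytesIO(frame))
    return object_detector(image)


@app.websocket("/object-detection")
async def object_detection(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            # Receive a video frame sent by the browser as raw bytes.
            frame = await websocket.receive_bytes()
            # Run the model in a worker thread so the event loop stays responsive.
            predictions = await asyncio.to_thread(detect_objects, frame)
            # Send the labels, scores, and bounding boxes back as JSON.
            await websocket.send_json(predictions)
    except WebSocketDisconnect:
        pass
```

On the browser side, the JavaScript code would capture frames from the camera, send them over the open WebSocket, and draw the returned bounding boxes on top of the video element.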
Until now, in our different examples and projects, we assumed that the ML model we used was fast enough to run directly in an API endpoint or a WebSocket task. However, that's not always the case: sometimes, the algorithm is so complex that it takes a couple of minutes to run. If we run this kind of algorithm directly inside an API endpoint, the...