As you might recall, an earlier section briefly discussed distributed systems and the scenario in which machine learning computation is performed primarily on host servers. Here, we will look at the opposite scenario, where these computations are performed on the user's side, in the browser. Doing so has two significant advantages:
- Compute is pushed to the user's side, so hosts do not have to manage servers for performing these computations.
- Pushing models to the user's side means that user data never has to be sent to the host. This is a huge advantage for applications that work with sensitive or private user data, which makes in-browser inference an excellent choice for privacy-critical machine learning applications:
The workflow shown in the preceding diagram illustrates the end-to-end pipeline...