Deploying a model with Amazon Elastic Inference
When deploying a model, you have to decide whether it should run on a CPU instance or on a GPU instance. In some cases, there isn't much of a debate. For example, some algorithms simply don't benefit from GPU acceleration, so they should be deployed to CPU instances. At the other end of the spectrum, complex deep learning models for Computer Vision or Natural Language Processing run best on GPUs.
In many cases, the situation is not that clear-cut. First, you should know the maximum prediction latency that is acceptable for your application. If you're predicting click-through rate for a real-time ad tech application, every millisecond counts. If you're predicting customer churn in a back-office application, not so much.
In addition, even models that could benefit from GPU acceleration may not be large and complex enough to fully utilize the thousands of cores available on a modern GPU. In such scenarios, you're stuck...
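This is the gap that Amazon Elastic Inference is designed to fill: instead of provisioning a full GPU instance, you attach a fractional GPU accelerator to a CPU instance. Here's a minimal sketch using the SageMaker Python SDK, assuming a TensorFlow model; the S3 path, execution role, framework version, and instance and accelerator sizes are placeholders you'd replace with your own values:

```python
import sagemaker
from sagemaker.tensorflow import TensorFlowModel

# Placeholders: point model_data at your own artifact and use your own role.
model = TensorFlowModel(
    model_data='s3://my-bucket/model/model.tar.gz',
    role=sagemaker.get_execution_role(),
    framework_version='2.3',  # assumption: a version with Elastic Inference support
)

# Deploy on a CPU instance and attach an Elastic Inference accelerator.
# The accelerator_type parameter is the only change compared to a plain
# CPU deployment; pick an accelerator size that matches your model.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type='ml.c5.large',
    accelerator_type='ml.eia2.medium',
)
```

The same `accelerator_type` parameter is available on the `deploy()` method of other framework models and estimators, so the pattern carries over beyond TensorFlow.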