All the steps we performed up to now apply to the cloud as well, but a few additional modules are required to configure the cloud virtual machines so that your DL applications are servable and scalable. So, before setting up your server, follow the instructions from the preceding section.
To deploy your DL applications in the cloud, you will need a server powerful enough to train your models and serve them at the same time. With the rapid growth of DL, the demand for cloud servers on which to practice and deploy projects has increased drastically, and so have the options on the market. The following is a list of some of the best options on offer:
- Paperspace (https://www.paperspace.com/)
- FloydHub (https://www.floydhub.com)
- Amazon Web Services (https://aws.amazon.com/)
- Google Cloud Platform (https://cloud.google.com/)
- DigitalOcean (https://cloud.digitalocean.com/)
All of these options have their own pros and cons, and the final choice depends entirely on your use case and preferences, so feel free to explore further. In this book, we will build and deploy our models mostly on Google Compute Engine (GCE), which is a part of Google Cloud Platform (GCP). Follow the steps mentioned in this chapter to spin up a VM server and get started.
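The chapter walks you through creating the VM via the GCP console; as a complementary sketch, the snippet below shows how an equivalent instance could be created programmatically with the `google-cloud-compute` Python client (`pip install google-cloud-compute`). The project ID, zone, machine type, and boot image used here are placeholder assumptions, not values prescribed by this book; substitute your own, and make sure your environment is authenticated with GCP credentials first.

```python
# A minimal sketch (assumed values, not the book's exact steps) for creating
# a GCE VM with the google-cloud-compute client library.
from google.cloud import compute_v1


def create_instance(project_id: str, zone: str, instance_name: str) -> None:
    # Boot disk built from a public Debian image family; a Deep Learning VM
    # image could be substituted for a preconfigured DL environment.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=50,
        ),
    )

    # Attach the instance to the project's default VPC network.
    network_interface = compute_v1.NetworkInterface(network="global/networks/default")

    instance = compute_v1.Instance(
        name=instance_name,
        machine_type=f"zones/{zone}/machineTypes/n1-standard-4",  # placeholder size
        disks=[boot_disk],
        network_interfaces=[network_interface],
    )

    # Submit the request and block until the create operation finishes.
    client = compute_v1.InstancesClient()
    operation = client.insert(project=project_id, zone=zone, instance_resource=instance)
    operation.result()  # raises if the operation failed
    print(f"Instance {instance_name} created in {zone}.")


if __name__ == "__main__":
    # Placeholder project, zone, and instance name -- replace with your own.
    create_instance("my-gcp-project", "us-central1-a", "dl-server")
```

Whether you use the console or a script like this, the result is the same kind of VM; the console route in this chapter is simply easier to follow the first time through.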