Launching an instance on Amazon Web Services (AWS)
Amazon Web Services (AWS) is the most popular cloud platform. If you don't have access to a local GPU, or if you prefer to use a server, you can set up an EC2 instance on AWS. In this recipe, we provide the steps to launch a GPU-enabled server.
Getting ready
Before we move on with this recipe, we assume that you already have an account on Amazon AWS and that you are familiar with its platform and the accompanying costs.
How to do it...
- Make sure the region you want to work in gives access to P2 or G3 instances. These instances include NVIDIA K80 GPUs and NVIDIA Tesla M60 GPUs, respectively. The K80 GPU is faster and has more GPU memory than the M60 GPU: 12 GB versus 8 GB.
While the NVIDIA K80 and M60 GPUs are powerful enough for running deep learning models, they should not be considered state of the art. NVIDIA has already launched faster GPUs, and it takes some time before these are added to cloud platforms. However, a big advantage of these cloud machines is that it is straightforward to scale the number of GPUs attached to a machine; for example, Amazon's p2.16xlarge instance has 16 GPUs.
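To verify that your region offers these instance types before you proceed, you can query the AWS CLI. This is a sketch that assumes the CLI is installed and configured with `aws configure`; the region and instance type are examples, and the `describe-instance-type-offerings` subcommand may be missing from older CLI versions:

```shell
# List the Availability Zones in us-east-1 that offer p2.xlarge instances.
# An empty result means the instance type is not available in that region.
aws ec2 describe-instance-type-offerings \
    --location-type availability-zone \
    --filters "Name=instance-type,Values=p2.xlarge" \
    --region us-east-1 \
    --output table
```

Repeat the query with `g3.4xlarge` (or another G3 size) to check for M60-backed instances.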
- There are two options when launching an AWS instance: you can build everything from scratch, or you can use a preconfigured Amazon Machine Image (AMI) from the AWS Marketplace. If you choose the latter, you will have to pay an additional cost for the AMI on top of the instance price. For an example, see this AMI at https://aws.amazon.com/marketplace/pp/B06VSPXKDX.
- Amazon provides a detailed and up-to-date overview of steps to launch the deep learning AMI at https://aws.amazon.com/blogs/ai/get-started-with-deep-learning-using-the-aws-deep-learning-ami/.
- If you want to build the server from scratch, launch a P2 or G3 instance and follow the steps in the Installing CUDA and cuDNN and Installing Anaconda and Libraries recipes.
- Always make sure you stop the running instances when you're done to prevent unnecessary costs.
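Stopping an instance can also be done from the CLI; the instance ID below is a placeholder:

```shell
# Stop a running instance when you are done with it.
# A stopped instance no longer incurs compute charges, although any
# attached EBS storage is still billed.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
```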
A good way to save costs is to use AWS Spot Instances, which allow you to bid on spare Amazon EC2 computing capacity.
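As a sketch of how this works with the AWS CLI (the price, AMI ID, and key pair below are placeholders, and Spot prices vary by region and over time):

```shell
# First, inspect the recent Spot price history for the instance type,
# so you can choose a sensible maximum bid:
aws ec2 describe-spot-price-history \
    --instance-types p2.xlarge \
    --product-descriptions "Linux/UNIX" \
    --max-items 5

# Then request one Spot instance with a maximum price of $0.30 per hour.
# ami-xxxxxxxx and my-key-pair are placeholders for your own AMI and key pair.
aws ec2 request-spot-instances \
    --spot-price "0.30" \
    --instance-count 1 \
    --launch-specification '{"ImageId":"ami-xxxxxxxx","InstanceType":"p2.xlarge","KeyName":"my-key-pair"}'
```

Keep in mind that a Spot instance can be reclaimed by AWS when the Spot price exceeds your bid, so save your work and model checkpoints regularly.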