In this chapter, we looked at the hardware side of deep learning and at how CPUs and GPUs serve our computational needs. We also examined how CUDA, NVIDIA's software layer for GPU-accelerated computing, is supported in Gorgonia, and finally, we built a model that makes use of Gorgonia's CUDA-backed features.
In the next chapter, we will look into vanilla RNNs and the issues associated with them, and we will learn how to build an LSTM in Gorgonia.