This book is intended for developers, data analysts, and deep learning enthusiasts who do not have much background in complex numerical computations but want to know what deep learning is. A strong understanding of Scala and its functional programming concepts is recommended. Some basic understanding and high-level knowledge of Spark ML, H2O, Zeppelin, DeepLearning4j, and MXNet will be an added advantage in grasping this book. Additionally, basic know-how of build tools such as Maven and SBT is assumed.
All the examples have been implemented using Scala on Ubuntu 16.04 LTS 64-bit and Windows 10 64-bit. You will also need the following (preferably the latest versions):
- Apache Spark 2.0.0 (or higher)
- MXNet, Zeppelin, DeepLearning4j, and H2O (see the details in the chapter and in the supplied pom.xml files)
- Hadoop 2.7 (or higher)
- Java (JDK and JRE) 1.7+/1.8+
- Scala 2.11.x (or higher)
- Eclipse Mars or Luna (latest) with Maven plugin (2.9+), Maven compiler plugin (2.3.2+), and Maven assembly plugin (2.4.1+)
- IntelliJ IDE
- SBT plugin and Scala Play Framework installed
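Since both Maven and SBT are assumed, the dependencies above can be declared in an SBT build file along the following lines. This is a minimal sketch only: the project name and the DeepLearning4j/ND4J version numbers are illustrative assumptions, so consult the supplied pom.xml files for the exact coordinates and versions used in each chapter.

```scala
// build.sbt -- a minimal, illustrative sketch of the project setup.
// Versions below are examples; match them against the supplied pom.xml files.
name := "deep-learning-examples"   // hypothetical project name

version := "1.0"

scalaVersion := "2.11.8"           // Scala 2.11.x, as required above

libraryDependencies ++= Seq(
  // Apache Spark 2.0.0 (or higher)
  "org.apache.spark"   %% "spark-core"           % "2.0.0",
  "org.apache.spark"   %% "spark-mllib"          % "2.0.0",
  // DeepLearning4j and its ND4J backend (versions are assumptions)
  "org.deeplearning4j" %  "deeplearning4j-core"  % "0.7.2",
  "org.nd4j"           %  "nd4j-native-platform" % "0.7.2"
)
```

With this file in the project root, `sbt compile` fetches the dependencies and builds the examples; the equivalent Maven setup lives in the pom.xml files shipped with the code.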
A computer with at least a Core i3 processor is needed; a Core i5 is recommended, and a Core i7 will give the best results. A multicore processor will also provide faster data processing and better scalability. At least 8 GB of RAM is recommended for standalone mode; use at least 32 GB of RAM for a single VM, and more for a cluster. You should also have enough storage to run heavy jobs (depending on the size of the datasets you will be handling), preferably at least 50 GB of free disk space (for standalone mode and for the SQL Warehouse).
Linux distributions are preferable (including Debian, Ubuntu, Fedora, RHEL, CentOS, and many more). To be more specific, for Ubuntu it is recommended to have a 14.04 (LTS) 64-bit (or later) complete installation, VMware Player 12, or VirtualBox. You can also run Spark jobs on Windows (XP/7/8/10) or Mac OS X (10.4.7+).