Setting up DL4J and the required dependencies
We revisit the DL4J setup here because we are now targeting a distributed environment. For demonstration purposes, we will use Spark's local mode, which lets us focus on DL4J itself rather than on provisioning clusters, worker nodes, and so on. In this recipe, we will set up a single-node Spark cluster (Spark local) and configure the DL4J-specific dependencies.
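Spark's local mode is selected through the master URL when the Spark context is created. The sketch below shows this in Java; the class and application names are arbitrary examples, and it assumes `spark-core` is already on the classpath.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkLocalDemo {
    public static void main(String[] args) {
        // "local[*]" runs Spark in local mode using all available cores,
        // so no separate cluster or worker nodes need to be provisioned.
        SparkConf conf = new SparkConf()
                .setMaster("local[*]")
                .setAppName("dl4j-spark-local-demo"); // example app name
        JavaSparkContext sc = new JavaSparkContext(conf);
        System.out.println("Spark master: " + sc.master());
        sc.stop();
    }
}
```

Switching to a real cluster later only requires changing the master URL (for example, to a `spark://host:port` or YARN master); the DL4J training code itself stays the same.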
Getting ready
In order to demonstrate the use of a distributed neural network, you will need the following:
- A distributed filesystem (Hadoop) for file management
- A distributed computing framework (Spark) for processing big data
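To wire these into a Maven project, the DL4J Spark module and an ND4J backend are added alongside Spark itself. The coordinates below are real artifact names, but the version numbers and Scala suffix (`_2.11`) are illustrative; they must match the Spark and Scala versions you actually use.

```xml
<!-- Illustrative versions; align them with your Spark/Scala installation. -->
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.4.8</version> <!-- example Spark version -->
    </dependency>
    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>dl4j-spark_2.11</artifactId>
        <version>1.0.0-beta7_spark_2</version> <!-- example DL4J version -->
    </dependency>
    <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-native-platform</artifactId>
        <version>1.0.0-beta7</version> <!-- CPU backend; example version -->
    </dependency>
</dependencies>
```

The ND4J backend determines where the linear algebra runs; `nd4j-native-platform` targets the CPU, and a CUDA backend can be substituted for GPU execution.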