Deep learning approaches
Deep learning networks are often described as neural networks that use multiple intermediate layers. Each layer trains on the outputs of the previous layer, potentially identifying features and subfeatures of a dataset. Features are those aspects of the data that may be of interest. In Chapter 8, Deep Learning, we will examine these types of networks and how they can support several different data science tasks.
These networks often work with unstructured and unlabeled datasets, which make up the vast majority of the data available today. A typical approach is to take the data, identify features, and then use these features and their corresponding layers to reconstruct the original dataset, thus validating the network. The Restricted Boltzmann Machine (RBM) is a good example of the application of this approach.
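To make the validation idea concrete, the following minimal sketch computes a reconstruction error without relying on any particular library. The arrays and the mean squared error measure are assumptions chosen for illustration; they stand in for an original record and the network's attempt to rebuild it.

public class ReconstructionCheck {

    // Mean squared error between an input and its reconstruction; a small
    // value suggests the learned features capture the original data well
    static double reconstructionError(double[] input, double[] reconstruction) {
        double sum = 0.0;
        for (int i = 0; i < input.length; i++) {
            double diff = input[i] - reconstruction[i];
            sum += diff * diff;
        }
        return sum / input.length;
    }

    public static void main(String[] args) {
        double[] original = {0.9, 0.1, 0.4, 0.7};
        double[] rebuilt  = {0.85, 0.15, 0.38, 0.72}; // hypothetical network output
        System.out.printf("reconstruction error: %.5f%n",
                reconstructionError(original, rebuilt));
    }
}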
The deep learning network also needs to ensure that its results are accurate and to minimize any error that can creep into the process. This is accomplished by adjusting the internal weights assigned to neurons using what is known as gradient descent, which follows the slope, or gradient, of the error with respect to the weights. The approach modifies the weights so as to minimize the error and also speeds up the learning process.
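As an illustration of the idea, this sketch applies a single gradient descent update to the weights of one linear neuron. The learning rate, input values, and squared-error loss are assumptions chosen for clarity, not part of the book's example.

public class GradientDescentStep {
    public static void main(String[] args) {
        double[] weights = {0.2, -0.4, 0.1}; // current weights
        double[] input = {1.0, 0.5, -1.5};   // one training example
        double target = 0.3;                 // desired output
        double learningRate = 0.05;          // step size (assumed)

        // Forward pass: prediction is the dot product of weights and input
        double prediction = 0.0;
        for (int i = 0; i < weights.length; i++) {
            prediction += weights[i] * input[i];
        }

        // For squared error E = (prediction - target)^2 / 2, the gradient
        // with respect to weight i is (prediction - target) * input[i]
        double error = prediction - target;
        for (int i = 0; i < weights.length; i++) {
            weights[i] -= learningRate * error * input[i]; // step down the slope
        }
        System.out.printf("error before update: %.4f%n", error);
    }
}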
Several types of networks have been classified as deep learning networks. One of these is the autoencoder network. In this network, the layers are symmetrical: the number of input values is the same as the number of output values, and the intermediate layers effectively compress the data down to a single, smaller internal layer. In this example, each layer of the autoencoder is an RBM.
This structure is reflected in the following example, which will extract the numbers found in a set of images containing hand-written digits. The details of the complete example are not shown here, but notice that the input and output layers each hold numberOfRows * numberOfColumns values, the widest internal layers use 1,000 values, and the internal layers consist of RBMs. The sizes of the layers are specified using the nIn and nOut methods.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    .seed(seed)
    .iterations(numberOfIterations)
    .optimizationAlgo(OptimizationAlgorithm.LINE_GRADIENT_DESCENT)
    .list()
    .layer(0, new RBM.Builder()
        .nIn(numberOfRows * numberOfColumns).nOut(1000)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build())
    .layer(1, new RBM.Builder().nIn(1000).nOut(500)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build())
    .layer(2, new RBM.Builder().nIn(500).nOut(250)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build())
    .layer(3, new RBM.Builder().nIn(250).nOut(100)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build())
    .layer(4, new RBM.Builder().nIn(100).nOut(30)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build()) // encoding stops
    .layer(5, new RBM.Builder().nIn(30).nOut(100)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build()) // decoding starts
    .layer(6, new RBM.Builder().nIn(100).nOut(250)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build())
    .layer(7, new RBM.Builder().nIn(250).nOut(500)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build())
    .layer(8, new RBM.Builder().nIn(500).nOut(1000)
        .lossFunction(LossFunctions.LossFunction.RMSE_XENT)
        .build())
    .layer(9, new OutputLayer.Builder(LossFunctions.LossFunction.RMSE_XENT)
        .nIn(1000).nOut(numberOfRows * numberOfColumns)
        .build())
    .pretrain(true).backprop(true)
    .build();
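For context, a model built from this configuration might be trained on the MNIST digits roughly as follows. This is a hedged sketch: the batch size and example count are illustrative values, the usual DL4J imports (MultiLayerNetwork, MnistDataSetIterator, DataSet, and DataSetIterator) are assumed, and because the network reconstructs its input, the features serve as both input and labels.

MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();

// Illustrative values; the MnistDataSetIterator constructor can throw IOException
DataSetIterator iterator =
    new MnistDataSetIterator(batchSize, numberOfExamples, true);
while (iterator.hasNext()) {
    DataSet next = iterator.next();
    // For an autoencoder, the input doubles as the training target
    model.fit(new DataSet(next.getFeatures(), next.getFeatures()));
}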
Once the model has been trained, it can be used for predictive and search tasks. With a search, the compressed middle layer can be used to match other compressed images that need to be classified.
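As a sketch of how such a search might work, the activations of the 30-unit middle layer (layer index 4 in the configuration above) can be treated as a compact code and compared with a simple distance measure. The use of feedForwardToLayer and Euclidean distance here is an assumption for illustration, not the only way to perform the match.

import java.util.List;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;

public class CompressedSearch {

    // Run the input up to and including the 30-unit middle layer (index 4)
    // and return its activations as a compact code
    static INDArray code(MultiLayerNetwork model, INDArray image) {
        List<INDArray> activations = model.feedForwardToLayer(4, image);
        return activations.get(activations.size() - 1);
    }

    // Euclidean distance between two codes; smaller means more similar
    static double distance(INDArray a, INDArray b) {
        return a.distance2(b);
    }
}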