Sparse autoencoders
Distributed sparse representations are one of the primary keys to learning useful features in deep learning algorithms. Not only are they a coherent mode of data representation, but they also help capture the generative process behind most real-world datasets. In this section, we will explain how autoencoders can be encouraged to produce sparse representations. We will start by introducing sparse coding. A code is termed sparse when an input provokes the activation of a relatively small number of nodes in a neural network, which combine to represent it in a sparse way. In deep learning, a similar constraint is applied to regular autoencoders: when trained with a sparsity constraint, they are called sparse autoencoders.
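To make the idea concrete, the following is a minimal NumPy sketch of how a sparsity constraint is typically added to an autoencoder's loss: a KL-divergence penalty pushes the average activation of each hidden unit toward a small target value (often denoted rho). The dimensions, the target sparsity of 0.05, and the penalty weight beta are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_divergence(rho, rho_hat):
    # KL divergence between two Bernoulli distributions with means
    # rho (target sparsity) and rho_hat (observed mean activation).
    return rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

# Hypothetical sizes: 20-dimensional inputs, 50 hidden units (overcomplete).
n_in, n_hidden = 20, 50
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in))
b2 = np.zeros(n_in)

X = rng.normal(0, 1, (100, n_in))   # toy batch of 100 inputs
H = sigmoid(X @ W1 + b1)            # hidden-layer activations
X_hat = H @ W2 + b2                 # reconstruction of the input

rho = 0.05                          # target average activation per hidden unit
rho_hat = H.mean(axis=0)            # observed average activation per hidden unit
beta = 3.0                          # weight of the sparsity penalty (assumed)

reconstruction_loss = np.mean((X - X_hat) ** 2)
sparsity_penalty = beta * kl_divergence(rho, rho_hat).sum()
loss = reconstruction_loss + sparsity_penalty
```

During training, minimizing this combined loss drives most hidden units toward near-zero activation for any given input, so each input ends up represented by a small subset of active units.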
Sparse coding
Sparse coding is an unsupervised method for learning sets of overcomplete bases in order to represent data in a coherent and efficient way. The primary goal of sparse coding is to determine a set of n vectors vi...