Similarly, in ML, it is hypothesized that different representations of data capture different explanatory factors of variation present in that data. The neural networks we saw were excellent at inducing efficient representations from their input values and leveraging these representations for all sorts of learning tasks. Yet these input values themselves had to undergo a host of preprocessing steps that transform raw data into a format more palatable to the networks.
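To make this concrete, here is a minimal sketch of the kind of manual preprocessing a practitioner might perform before handing data to a network: standardizing numeric columns and one-hot encoding a categorical column. The feature names and values are purely hypothetical.

```python
import numpy as np

# Hypothetical raw inputs: (age, income, city).
ages    = np.array([25.0, 40.0, 31.0, 58.0])
incomes = np.array([38_000.0, 72_000.0, 54_000.0, 91_000.0])
cities  = np.array(["paris", "lyon", "paris", "nice"])

# Standardize numeric features to zero mean and unit variance.
def standardize(x):
    return (x - x.mean()) / x.std()

# One-hot encode the categorical feature.
categories = sorted(set(cities))  # ["lyon", "nice", "paris"]
one_hot = np.array([[c == cat for cat in categories] for c in cities], dtype=float)

# Assemble the design matrix the network actually sees.
X = np.column_stack([standardize(ages), standardize(incomes), one_hot])
print(X.shape)  # (4, 5)
```

Every one of these decisions, which features to scale, how to encode categories, what to discard, is made by a person, not learned from the data.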
A key deficiency of neural networks today is their heavy dependence on such preprocessing and feature engineering to learn useful representations from the given data. On their own, they are unable to extract and categorize discriminative elements from raw input values. Often, behind every neural network there is a human.
We are still...