Weak supervision
Deep learning models have delivered remarkable results in recent years. Given enough training data, deep learning architectures obviate the need for feature engineering. However, a deep learning model needs enormous amounts of data to learn the underlying structure of a domain. On the one hand, deep learning reduced the manual effort required to handcraft features; on the other, it significantly increased the need for task-specific labeled data. In most domains, gathering a sizable set of high-quality labeled data is an expensive and resource-intensive undertaking.
This problem can be addressed in several ways. In previous chapters, we saw how transfer learning trains a model on a large dataset before fine-tuning it for a specific task. Figure 8.1 shows this and other approaches to acquiring labels:
Figure 8.1: Options for getting more labeled data
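The transfer learning route mentioned above can be sketched in a few lines. The snippet below is a minimal illustration, not the book's own code: it uses a small stand-in network as the "pretrained" backbone (in practice you would load weights trained on a large dataset), freezes it, and fine-tunes only a new task-specific head on a tiny labeled batch.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone. In a real workflow you would load
# a model pretrained on a large dataset; this small MLP is illustrative.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))

# Freeze the pretrained weights so only the new head is updated.
for p in backbone.parameters():
    p.requires_grad = False

# New task-specific head, trained on the scarce labeled data.
head = nn.Linear(64, 2)
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A tiny labeled batch stands in for the small task-specific dataset.
x = torch.randn(8, 32)
y = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Because the backbone's parameters are frozen, the optimizer touches only the head, which is why fine-tuning can succeed with far fewer labels than training from scratch.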
Hand labeling the data is a common approach...