Summary
In this chapter, we introduced semi-supervised learning, starting from the scenario itself and the assumptions needed to justify its approaches. We discussed the importance of the smoothness assumption, on which both supervised and semi-supervised classifiers rely to guarantee a reasonable generalization ability. We then introduced the clustering assumption, which is closely related to the geometry of the dataset and imposes a strong structural condition on density estimation: samples lying in the same high-density region are likely to share a label, while low-density regions act as boundaries between classes. Finally, we discussed the manifold assumption and its importance in avoiding the curse of dimensionality.
The chapter continued by introducing a generative and inductive model: Generative Gaussian mixtures, which cluster labeled and unlabeled samples under the assumption that the class-conditional densities p(x|y) can be modeled as multivariate Gaussian distributions, as sketched below.
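The following is a minimal sketch of how such a model can be fit with an EM-style loop, assuming a toy two-class dataset and NumPy/SciPy; the data and all names here are illustrative, not the chapter's implementation. Labeled samples keep their responsibilities pinned to the known class, while unlabeled ones receive soft responsibilities from the current Gaussians:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian blobs, mostly unlabeled (label -1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(2.0, 1.0, size=(100, 2))])
y = np.full(200, -1)
y[:10] = 0       # a few labeled samples from class 0
y[100:110] = 1   # a few labeled samples from class 1

n_classes = 2
priors = np.full(n_classes, 1.0 / n_classes)
means = np.array([[-1.0, -1.0], [1.0, 1.0]])
covs = np.array([np.eye(2) for _ in range(n_classes)])

for _ in range(50):
    # E-step: soft responsibilities from the current Gaussians;
    # labeled samples are forced to their known class
    R = np.zeros((len(X), n_classes))
    for k in range(n_classes):
        R[:, k] = priors[k] * multivariate_normal.pdf(X, means[k], covs[k])
    R /= R.sum(axis=1, keepdims=True)
    labeled = y >= 0
    R[labeled] = np.eye(n_classes)[y[labeled]]

    # M-step: re-estimate priors, means, and covariances
    Nk = R.sum(axis=0)
    priors = Nk / len(X)
    for k in range(n_classes):
        means[k] = (R[:, k] @ X) / Nk[k]
        diff = X - means[k]
        covs[k] = (R[:, k] * diff.T) @ diff / Nk[k] + 1e-6 * np.eye(2)

print("priors:", priors.round(3))
print("means:\n", means.round(3))
```

The model is inductive because, once fitted, the estimated priors and Gaussians assign a class posterior to any new sample, not only to those seen during training.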
The next topic was a very important algorithm: contrastive pessimistic likelihood estimation (CPLE), a framework that can turn any supervised classifier into a semi-supervised one by assigning pessimistic soft labels to the unlabeled samples, with the guarantee that the resulting model never performs worse than the purely supervised solution trained on the labeled data alone.
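To make that guarantee concrete, here is a sketch of the objective in the form usually associated with this method; the notation (q for the soft labels, theta_sup for the purely supervised parameters) is assumed for illustration rather than taken from the chapter:

```latex
% Contrastive pessimistic log-likelihood: the gain of parameters \theta
% over the supervised solution \theta_{sup}, measured on the labeled
% data (X_L, y_L) and the unlabeled data X_U completed with soft labels q
CL(\theta, q) = \mathcal{L}(\theta \mid X_L, y_L, X_U, q)
              - \mathcal{L}(\theta_{sup} \mid X_L, y_L, X_U, q)

% Pessimism: the soft labels are chosen adversarially (min over q),
% and the parameters are optimized against that worst case
\hat{\theta} = \arg\max_{\theta} \; \min_{q} \; CL(\theta, q)
```

Since CL(theta_sup, q) = 0 for any soft labeling q, the max-min value is never negative, which is exactly the never-worse-than-supervised guarantee stated above.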