In this chapter, we learned about developing probabilistic models within a Bayesian framework, which vastly reduces data requirements and can achieve human-level performance. From the handwritten-character example discussed previously, we also observed that probabilistic models can not only learn to classify characters but also learn the underlying concept, that is, apply the acquired knowledge in new ways: generating similar characters, generating entirely new characters from only a few examples in a set, and parsing a character into its parts and relations.
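To make the idea concrete, the following is a minimal sketch, not the model from the chapter's example: each pixel of a binary character is modeled with a Beta-Bernoulli conjugate prior, so a single noisy example per concept yields a posterior that supports both classification (by comparing predictive likelihoods) and generation (by sampling from the learned pixel probabilities). The prototypes, noise level, and 5x5 grid are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 5x5 binary "character" prototypes.
proto_a = np.array([[1, 0, 0, 0, 1],
                    [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [1, 0, 0, 0, 1]])  # an "X"-like stroke pattern
proto_b = np.array([[1, 1, 1, 1, 1],
                    [1, 0, 0, 0, 0],
                    [1, 1, 1, 0, 0],
                    [1, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1]])  # an "E"-like stroke pattern

def sample(proto, flip=0.1):
    """Draw a noisy example by flipping each pixel with probability `flip`."""
    noise = rng.random(proto.shape) < flip
    return np.where(noise, 1 - proto, proto)

def posterior_pixel_probs(examples, a=1.0, b=1.0):
    """Beta(a, b)-Bernoulli posterior mean per pixel, given few examples."""
    x = np.stack(examples)
    return (x.sum(axis=0) + a) / (len(examples) + a + b)

def log_likelihood(img, p):
    """Log-probability of a binary image under per-pixel Bernoulli probs."""
    return np.sum(img * np.log(p) + (1 - img) * np.log(1 - p))

# One-shot learning: a single noisy example per concept.
theta_a = posterior_pixel_probs([sample(proto_a)])
theta_b = posterior_pixel_probs([sample(proto_b)])

# Classification: compare posterior predictive likelihoods of a new example.
test = sample(proto_a)
label = "A" if log_likelihood(test, theta_a) > log_likelihood(test, theta_b) else "B"

# Generation: sample a new character from the learned concept.
generated = (rng.random(theta_a.shape) < theta_a).astype(int)
```

The same learned posterior serves both tasks, which is the point of concept learning: the model captures a reusable distribution over characters, not just a decision boundary. Richer models of this kind replace independent pixels with structured parts and relations.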
However, human learners approach new learning tasks armed with extensive prior knowledge, accumulated over many experiences with rich, overlapping structure. To mimic human learning, the graphical structure needs more dependencies, and rich inductive biases need to be built into the models. It...