Let's start by looking at a formal definition of transfer learning and then utilize it to understand different strategies. In their paper, A Survey on Transfer Learning (https://www.cse.ust.hk/~qyang/Docs/2009/tkde_transfer_learning.pdf), Pan and Yang use domain, task, and marginal probabilities to present a framework for understanding transfer learning. The framework is defined as follows:
A domain, D, is defined as a two-element tuple consisting of the feature space, 𝒳, and the marginal probability, P(Χ), where Χ is a sample set of data points.
Here, Χ = {x1, x2, ..., xn} with xi as a specific vector and Χ ∈ 𝒳. Thus, the domain can be represented as D = {𝒳, P(Χ)}.
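As a quick worked illustration (an assumed example for concreteness, not one drawn from the survey itself), consider sentiment classification of product reviews with a bag-of-words representation; the domain then bundles the term-vector feature space with the marginal distribution from which the review vectors are drawn:

```latex
% Domain D for an assumed bag-of-words review-classification setting
% \mathcal{X}: feature space of all possible term-frequency vectors
% X = \{x_1, \ldots, x_n\}: a particular sample of review vectors, with X \in \mathcal{X}
D = \{\mathcal{X}, P(X)\}, \qquad X = \{x_1, x_2, \ldots, x_n\}
```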
A task, T, on the other hand, can be defined as a two-element tuple of the label space, γ, and the objective function, f. The objective function can also be denoted as P(γ | Χ) from a probabilistic viewpoint.
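Continuing the same assumed review-classification example, the task pairs the label space with the objective function learned from the training data; probabilistically, the objective function models the conditional distribution of labels given a sample:

```latex
% Task T for the assumed review-classification setting
% \gamma = \{\text{positive}, \text{negative}\}: label space (matching the text's notation)
% f(\cdot): objective function learned from training pairs (x_i, y_i)
T = \{\gamma, f\}, \qquad f(x) = P(y \mid x), \; y \in \gamma
```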