The gradient descent and VC Dimension theories
Gradient descent and VC Dimension are two fundamental theories in machine learning. In general, gradient descent gives a structured approach to finding the optimal coefficients of a function. The hypothesis space of a function can be large, and gradient descent searches it iteratively, moving toward a minimum (ideally the global minimum) where the cost function (for example, the sum of squared errors) is lowest.
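To make this concrete, the short sketch below fits a straight line y = w*x + b by gradient descent on the sum of squared errors. The toy data, learning rate, and iteration count are illustrative choices only, not part of any particular library or method.

```python
# A minimal sketch of gradient descent for a straight-line fit,
# y = w*x + b, minimizing the sum of squared errors.
# Data, learning rate, and iteration count are illustrative only.

# toy data roughly following y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]

w, b = 0.0, 0.0          # initial coefficients
learning_rate = 0.01

for step in range(1000):
    # gradients of the sum of squared errors with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys))
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys))
    # move the coefficients a small step against the gradient
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"fitted coefficients: w={w:.2f}, b={b:.2f}")  # close to 2 and 1
```

Each iteration nudges the coefficients a small distance against the gradient of the cost, so repeated steps drive the sum of squared errors toward a minimum.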
VC Dimension measures the richness (capacity) of a hypothesis class and provides a structured way to assess its limits. The VC Dimension of a hypothesis class is the largest number of points it can shatter, that is, classify correctly under every possible assignment of labels to those points. For example, a linear boundary in the plane can realize every labeling of 2 or 3 points in general position, but there is a labeling of 4 points that no line can separate. Hence, the VC Dimension of linear classifiers in two dimensions is 3.
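The shattering argument behind that example can be checked directly. The sketch below uses an illustrative brute-force search over candidate lines (not an exact separability test) to confirm that three points in general position can be labeled in every possible way by a line, while the XOR labeling of four points cannot.

```python
# Illustrative check of the 2-D example: three points in general position
# can be shattered by lines, but the XOR arrangement of four points cannot.
# The coarse grid search over line coefficients is a demonstration,
# not an exact separability test.
from itertools import product

def separable(points, labels):
    """Return True if some line w1*x + w2*y + b = 0 on the grid separates the labels."""
    steps = [i / 10.0 for i in range(-20, 21)]   # coarse grid of coefficients
    for w1, w2, b in product(steps, steps, steps):
        if w1 == 0 and w2 == 0:
            continue
        signs = [1 if w1 * x + w2 * y + b > 0 else -1 for x, y in points]
        if signs == labels:
            return True
    return False

three = [(0, 0), (1, 0), (0, 1)]
print(all(separable(three, list(lab)) for lab in product([-1, 1], repeat=3)))
# True: every labeling of the 3 points is achievable, so they are shattered.

four = [(0, 0), (1, 1), (1, 0), (0, 1)]
print(separable(four, [1, 1, -1, -1]))
# False: the XOR labeling of these 4 points has no separating line.
```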
VC Dimension, like many other topics...