The curse of dimensionality is not a new term or concept. It was originally coined by R. Bellman while tackling problems in dynamic programming (the Bellman equation). In machine learning, the core idea refers to the problem that as we increase the number of dimensions (axes or features) while the amount of training data (samples) remains the same (or relatively low), the accuracy of our predictions degrades. This phenomenon is also referred to as the Hughes effect, named after G. Hughes, and describes the problem caused by the rapid (exponential) growth of the search space as more and more dimensions are introduced into the problem space. It is a bit counterintuitive, but if the number of samples does not expand at the same rate as you add dimensions, you actually end up with a less accurate model!
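To make this sparsity concrete, here is a minimal NumPy sketch (not from the text; the sample count and dimension values are illustrative assumptions) that keeps the number of samples fixed and measures the average distance to the nearest neighbor as the number of dimensions grows. Because the same number of points must cover an exponentially larger space, even the nearest neighbor drifts farther and farther away.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000  # fixed number of training samples, regardless of dimension

for n_dims in (1, 2, 5, 10, 50, 100):
    # Points uniformly distributed in the unit hypercube [0, 1]^d
    points = rng.random((n_samples, n_dims))

    # Pairwise squared Euclidean distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (points ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    d2 = np.maximum(d2, 0.0)          # guard against tiny negative round-off values
    dists = np.sqrt(d2)
    np.fill_diagonal(dists, np.inf)   # ignore each point's zero distance to itself

    # With the sample count held constant, the nearest neighbor gets farther
    # as dimensions are added -- the data becomes increasingly sparse.
    mean_nn = dists.min(axis=1).mean()
    print(f"d = {n_dims:3d}  mean nearest-neighbor distance = {mean_nn:.3f}")
```

Running the sketch shows the mean nearest-neighbor distance climbing steadily with the dimension count, which is exactly the sparsity that hurts prediction accuracy when the sample size does not grow along with the feature count.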
In a nutshell, most machine learning algorithms are statistical...