Machine learning is focused on writing software that can learn from past experience. One of the standard definitions of machine learning, given by Tom Mitchell, a professor at Carnegie Mellon University (CMU), is the following:

> "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
For example, a computer program that learns to play chess might improve its performance, as measured by its ability to win games of chess, through the experience it gains by playing against itself. In general, to have a well-defined learning problem, we must identify the class of tasks, the measure of performance to be improved, and the source of experience. The chess-learning problem can therefore be framed in terms of a task, a performance measure, and training experience (captured in the short sketch after this list), where:
- Task T is playing chess
- Performance measure P is the percentage of games won against opponents
- Training experience E is the program playing practice chess games against itself
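Mitchell's task/performance/experience framing can be written down directly. The following is a minimal Python sketch, purely illustrative: the `LearningProblem` class and the `win_rate` function are names chosen here, not part of any library.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class LearningProblem:
    """Mitchell-style framing: a task T, a performance measure P, and an experience source E."""
    task: str                                 # T: what the program is trying to do
    performance: Callable[[int, int], float]  # P: how improvement is measured
    experience: str                           # E: where the practice data comes from


def win_rate(wins: int, games_played: int) -> float:
    """Performance measure P for chess: the fraction of games won against opponents."""
    return wins / games_played if games_played else 0.0


chess = LearningProblem(
    task="playing chess",
    performance=win_rate,
    experience="practice games the program plays against itself",
)
```

Framing a problem this way forces you to be explicit about what "better" means (P) and where the practice data comes from (E) before any learning algorithm is even chosen.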
To put it simply, if a computer program improves how it performs a task by drawing on previous experience, we can say that it has learned. This is very different from a program that performs a particular task because its programmers have already defined all the parameters and provided the data required to do so. A conventional program can play chess because its programmers have written the code for a built-in winning strategy. A machine learning program, by contrast, has no built-in strategy; it only has a set of rules describing the legal moves and what constitutes a winning position. Such a program must learn by repeatedly playing the game until it is able to win.
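Chess is far too large to learn convincingly in a few lines, so the sketch below uses a much simpler, assumed toy game (a counting game to 21) to illustrate the same idea: the program is given only the legal moves and the winning condition, and it improves purely through practice games against itself. The learning rule here is a crude tabular, Monte Carlo-style value update chosen for brevity, not a chess engine or any particular author's method.

```python
import random
from collections import defaultdict

TARGET = 21
MOVES = (1, 2, 3)  # the only rules the program knows: legal moves and the winning condition below

# Q[total][move] is the learned estimate of how good `move` is when the running count is `total`.
Q = defaultdict(lambda: {m: 0.0 for m in MOVES})


def legal_moves(total):
    return [m for m in MOVES if total + m <= TARGET]


def choose(total, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the learned values, occasionally explore."""
    moves = legal_moves(total)
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[total][m])


def self_play_game():
    """Task T at toy scale: play one practice game against itself (the experience E)."""
    total, player, history = 0, 0, {0: [], 1: []}
    while True:
        move = choose(total)
        history[player].append((total, move))
        total += move
        if total == TARGET:          # winning condition: whoever reaches exactly 21 wins
            return history, player
        player = 1 - player


def learn(episodes=20000, lr=0.05):
    """Nudge the value of every move played toward the game's outcome (Monte Carlo-style update)."""
    for _ in range(episodes):
        history, winner = self_play_game()
        for player, moves in history.items():
            outcome = 1.0 if player == winner else -1.0
            for total, move in moves:
                Q[total][move] += lr * (outcome - Q[total][move])


def performance(games=2000):
    """Performance measure P: fraction of games won against a purely random opponent."""
    wins = 0
    for g in range(games):
        total, player = 0, g % 2     # alternate which side moves first
        while True:
            if player == 0:          # the learner plays greedily with what it has learned
                move = choose(total, epsilon=0.0)
            else:                    # the opponent plays random legal moves
                move = random.choice(legal_moves(total))
            total += move
            if total == TARGET:
                if player == 0:
                    wins += 1
                break
            player = 1 - player
    return wins / games


print(f"win rate before learning: {performance():.2f}")
learn()
print(f"win rate after learning:  {performance():.2f}")
```

The measured win rate against a random opponent plays the role of the performance measure P from the definition above: if it rises after more practice games, then by that definition the program has learned, even though no strategy was ever coded in.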