What is DL?
It has only been about a decade since DL emerged, but it has rapidly come to play an important role in our daily lives. Within the field of artificial intelligence (AI), there is a popular set of methods known as machine learning (ML). The goal of ML is to build a model that extracts meaningful patterns from historical data and makes sensible predictions and decisions for newly collected data. DL is an ML technique that exploits artificial neural networks (ANNs) to capture a given pattern. Figure 1.1 presents the key milestones of the AI evolution that started around the 1950s, when Alan Turing, among other godfathers of the field, began discussing intelligent machines. While various ML algorithms have been introduced sporadically since the advent of AI, it took several more decades for the field to bloom. Similarly, it has only been about a decade since DL became mainstream, as many of its algorithms require extensive computational power that has only recently become widely available.
Figure 1.1 – A history of AI
As shown in Figure 1.2, the key advantage of DL comes from ANNs, which enable the automatic selection of necessary features. Loosely modeled on the structure of the human brain, ANNs are made up of components called neurons. A group of neurons forms a layer, and multiple layers are linked together to form a network. This kind of architecture can be understood as a series of nested transformations. As the input data passes through the network, each neuron extracts different information, and the model is trained to select the most relevant features for the given task. Since neurons are the building blocks of pattern recognition, deeper networks generally achieve better performance, as they can capture finer details:
Figure 1.2 – The difference between ML and DL
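To make the idea of layered neurons concrete, the following is a minimal sketch of a forward pass through a tiny two-layer network, written in plain NumPy. The layer sizes, random weights, and ReLU nonlinearity are illustrative choices, not a prescription:

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The nonlinearity applied by each neuron
    return np.maximum(0, x)

# Illustrative sizes: 4 input features, 8 hidden neurons, 2 outputs
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # first (hidden) layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # second (output) layer

x = rng.normal(size=(1, 4))  # one input sample with 4 features

# Forward pass: the input flows through the layers, and each layer
# extracts a different representation of the data
h = relu(x @ W1 + b1)  # hidden-layer activations
y = h @ W2 + b2        # network output
print(y)

In a real setting, the weights would not stay random; training adjusts them so that the network's outputs match the desired targets for the given task.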
While typical ML techniques require features to be selected manually, DL learns to select important features on its own. Therefore, it can potentially be adapted to a broader range of problems. However, this advantage does not come for free. To train a DL model properly, the training datasets need to be large and diverse enough, which increases the training time. Interestingly, the graphics processing unit (GPU) has played a major role in reducing this training time. While a central processing unit (CPU) is effective at carrying out complex computations thanks to its broader instruction set, a GPU is better suited to large volumes of simpler computations thanks to its massive parallelism. Because DL models depend heavily on matrix multiplications, which map naturally onto this parallelism, the GPU has become a critical component of DL.
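As a rough illustration of this parallelism, the following sketch times the same large matrix multiplication on the CPU and, when one is available, on a CUDA GPU. It assumes PyTorch is installed; the matrix size is arbitrary, and the actual speedup depends entirely on the hardware:

import time
import torch

n = 4096
a, b = torch.randn(n, n), torch.randn(n, n)

start = time.perf_counter()
c_cpu = a @ b  # matrix multiplication on the CPU
print(f"CPU: {time.perf_counter() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()  # move the operands to the GPU
    torch.cuda.synchronize()           # wait for the transfer to finish
    start = time.perf_counter()
    c_gpu = a_gpu @ b_gpu              # the same multiplication, in parallel
    torch.cuda.synchronize()           # GPU kernels launch asynchronously
    print(f"GPU: {time.perf_counter() - start:.3f} s")

On most machines with a discrete GPU, the second measurement is a small fraction of the first, which illustrates why GPUs are so central to DL training.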
As we are still in the early stages of the AI era, chip technology is evolving continuously, and it is not yet clear how it will change in the future. It is worth mentioning that new designs come from start-ups as well as big tech companies. This ongoing race strongly suggests that more and more products and services based on AI will be introduced. Considering the growth in the market and job opportunities, we believe that it is a great time to learn about DL.
Things to remember
a. DL is an ML technique that exploits ANNs for pattern recognition.
b. DL is highly flexible because it learns to select the most relevant features for the given task automatically during training.
c. GPUs can speed up DL model training with their massive parallelism.
Now that we understand what DL is at a high level, we will describe how it shapes our daily lives in the next section.