Iterative Machine Learning: A step towards Model Accuracy

  • 10 min read
  • 01 Dec 2017

Learning something by rote, i.e. repeating it many times; perfecting a skill by practising it over and over again; building something by making minor adjustments progressively to a prototype: these things come to us naturally as human beings. Machines can also learn this way, and this is called 'iterative machine learning'. In most cases, iteration is an efficient learning approach that helps reach the desired end results faster and more accurately without becoming a resource-crunch nightmare.

Now, you might wonder, isn't iteration inherently part of any kind of machine learning? In other words, modern-day machine learning techniques across the spectrum, from basic regression analysis, decision trees and Bayesian networks to advanced neural nets and deep learning algorithms, have some iterative component built into them. What is the need, then, for discussing iterative learning as a standalone topic? Simply because introducing iteration externally to an algorithm can minimize the error margin and therefore help in accurate modelling.

How Iterative Learning works

Let’s understand how iteration works by looking closely at what happens during a single iteration flow within a machine learning algorithm.

A pre-processed training dataset is first introduced into the model. After processing and model building with the given data, the model is tested, and the results are matched against the desired/expected output. The feedback is then fed back into the system so that the algorithm can further learn and fine-tune its results. This shows that two iteration processes take place here:

  1. Data Iteration - Inherent to the algorithm
  2. Model Training Iteration - Introduced externally

[Image: the iteration flow within a machine learning model]
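
To make this loop concrete, here is a minimal sketch in Python of the flow just described, with gradient-descent linear regression standing in for the model; the data, the model choice and the hyperparameters are illustrative placeholders, not anything the article prescribes:

```python
import numpy as np

# Train -> test -> feed-back-error loop. The model is a simple linear
# regression fitted by gradient descent; X, y and the hyperparameters
# are illustrative placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # pre-processed training dataset
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)

weights = np.zeros(3)
learning_rate = 0.1

for step in range(200):                  # model training iteration (external)
    predictions = X @ weights            # process the data with the model
    error = predictions - y              # match results against expected output
    gradient = X.T @ error / len(y)      # feedback returned to the system
    weights -= learning_rate * gradient  # the model fine-tunes itself

print("learned weights:", weights.round(2))
```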

Now, what if we did not feed the results back into the system, i.e. did not allow the algorithm to learn iteratively, but instead adopted a sequential approach? Would the algorithm still work, and would it provide the right results?

Yes, the algorithm would definitely work. However, the quality of its results would vary vastly based on a number of factors: the quality and quantity of the training dataset, the feature definition and extraction techniques employed, and the robustness of the algorithm itself, among many others. Even if all of the above were done perfectly, there is still no guarantee that the results produced by a sequential approach would be highly accurate. In short, the results would be neither consistently accurate nor reproducible. Iterative learning thus allows algorithms to improve model accuracy.

Certain algorithms have iteration central to their design and can be scaled with the data size. These algorithms are at the forefront of machine learning implementations because of their ability to perform faster and better. In the following sections we will discuss iteration in algorithms drawn from each of the three main machine learning approaches: supervised ML, unsupervised ML and reinforcement learning.

The Boosting algorithms: Iteration in supervised ML

Boosting algorithms, inherently iterative in nature, are a brilliant way to improve results by minimizing errors. They are primarily designed to reduce bias and to transform a set of weak learner classifiers into strong learners that make fewer errors. Some examples are:

  • AdaBoost (Adaptive Boosting)
  • Gradient Tree Boosting
  • XGBoost

How they work

All boosting algorithms work with a common set of classifiers that are iteratively modified to reach the desired result. Let's take the example of finding cases of plagiarism in a certain article. A first classifier here could look for a group of words that appears somewhere else, in another article, which would raise a red flag. If we create 10 separate groups of words and call them classifiers 1 to 10, the article will be checked against each of these classifiers and any possible match will be red-flagged. However, the absence of red flags from these 10 classifiers would not mean the article is definitely 100% original. We would therefore need to update the classifiers, perhaps creating shorter groups based on the first pass, and improve the accuracy with which they detect similarity with other articles.

This iteration process eventually leads to a fairly high rate of accuracy, because after each iteration the classifiers are updated based on their performance: the ones that found close similarity with other content are tweaked so that we get a better match. This process of inherently improving the algorithm is termed boosting, and it is currently one of the most popular methods in supervised machine learning.
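
As a concrete illustration, here is a short sketch using scikit-learn's AdaBoostClassifier on a synthetic dataset (the data and settings are toy values); staged_predict exposes the model's predictions after each boosting iteration, which makes the iterative error reduction visible:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the plagiarism-detection example;
# each boosting iteration re-weights the weak classifiers based on
# their errors, so accuracy tends to improve as iterations accumulate.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = AdaBoostClassifier(n_estimators=50, random_state=42)
model.fit(X_train, y_train)

# staged_predict yields the ensemble's predictions after each iteration.
for i, y_pred in enumerate(model.staged_predict(X_test), start=1):
    if i % 10 == 0:
        print(f"after {i:2d} iterations: accuracy = {accuracy_score(y_test, y_pred):.3f}")
```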

Strengths & weaknesses

The obvious advantage of this approach is that it leaves minimal errors in the final model, as iteration enables the model to correct itself every time it errs. The downside is the higher processing time and the overall memory required for a large number of iterations. Another important aspect is that the error fed back to train the model comes from outside, which means the supervisor has control over the model and how it is modified. The flip side is that the model doesn't learn to eliminate error on its own, and so it is not reusable with another set of data: it cannot be ported to another dataset, as it would need to start the learning process from scratch.

Artificial Neural Networks: Iteration in unsupervised ML

Neural networks have become the poster child for unsupervised machine learning because of their accuracy in predicting data models. Some well-known neural networks are:

  • Convolutional Neural Networks  
  • Boltzmann Machines
  • Recurrent Neural Networks
  • Deep Neural Networks
  • Memory Networks

How they work

Artificial neural networks are highly accurate in simulating data models mainly because of their iterative learning process. But this process differs from the one we explored for boosting algorithms: here it is seamless and natural, and in a way it paves the way for reinforcement learning in AI systems.

Neural networks consist of electronic networks that simulate the way the human brain works. Every network has input and output nodes and, in between, hidden layers made up of algorithms. The input nodes receive the initial dataset, a set of operations is performed, and each iteration produces a result that is output as a string of data. This output is matched against the actual result dataset, and the error is fed back to the input side. The error enables the algorithms to correct themselves and get closer and closer to the actual dataset. This process is called training the neural network, and each iteration improves its accuracy. The key difference from the iteration performed by boosting algorithms is that here we don't have to update the classifiers manually: the algorithms adjust themselves based on the error feedback.
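
A minimal hand-rolled sketch of this training loop, assuming a tiny two-layer network learning the XOR function; the architecture, the task and the implicit learning rate are illustrative assumptions, not a production recipe:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input node data
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected output

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden layer
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output node

for step in range(5000):
    hidden = sigmoid(X @ W1)              # forward pass through the hidden layer
    output = sigmoid(hidden @ W2)
    error = output - y                    # mismatch with the actual dataset

    # Backpropagation: the error flows back through the network and each
    # layer's weights correct themselves, with no manual classifier updates.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ grad_out
    W1 -= X.T @ grad_hidden

print(output.round(2).ravel())            # approaches [0, 1, 1, 0]
```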

Strengths & weaknesses

The main advantage of this process is obviously the level of accuracy it can achieve on its own. The model is also reusable, because it learns the means to achieve accuracy rather than just giving you a direct result. The flip side is that models can go badly wrong and deviate in a completely different direction, because the induced iteration takes its own course and doesn't need human supervision. The Facebook chatbots that deviated from their original goal and started communicating with each other in a language of their own are a case in point. But, as the saying goes, smart things come with their own baggage. It's a risk we have to be ready to tackle if we want to create more accurate models and smarter systems.

Reinforcement Learning

Reinforcement learning is an interesting case of machine learning where simple neural networks are connected together and interact with the environment, learning from their mistakes and rewards. The iteration introduced here happens in a more complex way: it takes the form of a reward or punishment for arriving at the correct or wrong result respectively. After each interaction of this kind, the multilayered neural networks incorporate the feedback and recreate the models for better accuracy. This reward-and-punishment method puts reinforcement learning in a space where it is neither supervised nor unsupervised but exhibits traits of both, with the added advantage of producing more accurate results. The con is that the models are complex by design: multilayered neural networks are difficult to handle across multiple iterations, because each layer might respond differently to a certain reward or punishment. This can create inner conflict that leads to a stalled system, one that can't decide which direction to move next.
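
As an illustration of the reward/punishment loop, here is a minimal sketch using tabular Q-learning, a classic reinforcement learning method (far simpler than the multilayered networks described above); the five-state corridor, the rewards and the hyperparameters are all toy assumptions:

```python
import numpy as np

# Five-state corridor: state 4 is the goal (+1 reward); stepping left
# from state 0 is punished (-1). Actions: 0 = left, 1 = right.
rng = np.random.default_rng(2)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    state = 0
    while state != 4:                    # iterate until the goal is reached
        # Explore occasionally, otherwise exploit what has been learned.
        action = rng.integers(2) if rng.random() < epsilon else int(Q[state].argmax())
        next_state = min(state + 1, 4) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == 4 else (-1.0 if action == 0 and state == 0 else 0.0)
        # Feedback step: the reward (or punishment) updates the model.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q[:4].argmax(axis=1))              # learned policy for states 0-3: move right
```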

Some Practical Implementations of Iteration

Many modern-day machine learning platforms and frameworks have implemented the iteration process on their own to create better data models; Apache Spark and Hadoop MapReduce are two such examples. The way the two implement iteration is technically different, and each has its merits and limitations.

Let's look at MapReduce first. It reads and writes data directly to the HDFS filesystem on disk, and reading from and writing to disk on every iteration takes significant time. This makes for a more robust and fault-tolerant system, but it compromises on speed. Apache Spark, on the other hand, stores the data in memory (as Resilient Distributed Datasets, or RDDs), i.e. in RAM. As a result, each iteration takes much less time, which enables Spark to perform lightning-fast data processing. The primary problem with the Spark way of iterating is that dynamic memory (RAM) is much less reliable than disk for storing iteration data and performing complex operations, so Spark is much less fault tolerant than MapReduce.
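
A small PySpark sketch of this in-memory iteration; the dataset and the computation are illustrative, and cache() is what keeps repeated passes in RAM rather than going back to disk:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iteration-demo").getOrCreate()
sc = spark.sparkContext

# cache() marks the RDD to be kept in memory, so each subsequent pass
# re-reads it from RAM instead of re-reading from disk (the MapReduce way).
data = sc.parallelize(range(1_000_000)).cache()

total = 0
for i in range(10):                      # ten iterations over the cached RDD
    total += data.map(lambda x, i=i: x * i).sum()

print(total)
spark.stop()
```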

Bringing it together

To sum up the discussion, we can look at the process of iteration and its stages in implementing machine learning models roughly as follows:

[Image: the stages of iteration in implementing machine learning models]

  1. Parameter Iteration: This is the first and inherent stage of iteration for any algorithm. The parameters involved in an algorithm are run multiple times, and the best-fitting parameters for the model are finalized in this process (a short sketch of this stage follows the list).
  2. Data Iteration: Once the model parameters are finalized, the data is put into the system and the model is simulated. Multiple sets of data are fed in to check the parameters' effectiveness in bringing out the desired result. If the data iteration stage suggests that some of the parameters are not well suited to the model, those are taken back to the parameter iteration stage and parameters are added or modified.
  3. Model Iteration: After the initial parameters and datasets are finalized, the model testing/training happens. Iteration in the model testing phase is all about running the same model simulation multiple times with the same parameters and dataset, and then checking the amount of error. If the error varies significantly in every iteration, then something is wrong with either the data or the parameters or both. Iterations on data and parameters continue until the model achieves accuracy.
  4. Human Iteration: This step involves human-induced iteration, where different models are put together to create a fully functional smart system. Here, multiple levels of fitting and refitting happen to achieve a coherent overall goal, such as creating a driverless car system or a fully functional AI.
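
As a small illustration of the parameter iteration stage, here is a sketch using scikit-learn's GridSearchCV: candidate parameters are evaluated repeatedly via cross-validation and the best-fitting combination is kept. The dataset and the parameter grid are toy values:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset standing in for real training data.
X, y = make_classification(n_samples=500, random_state=0)

# Each parameter combination is run five times (cv=5); the best-fitting
# parameters for the model are finalized automatically.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
grid.fit(X, y)

print("best parameters:", grid.best_params_)
print("best score:", round(grid.best_score_, 3))
```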

Iteration is pivotal to creating smarter AI systems in the near future. The enormous memory requirements for performing multiple iterations on complex data sets continue to pose major challenges. But with increasingly better AI chips, storage options and data transfer techniques, these challenges are getting easier to handle.

We believe iterative machine learning techniques will continue to lead the transformation of the AI landscape in the near future.