Applying automation to machine learning
We've covered the idea of automation and various types of automation thus far, but what's the connection between automation and machine learning? What exactly is it that we are trying to automate in machine learning?
That's what this section aims to demystify. By the end of this section, you will know the difference between the terms automation with machine learning and automating machine learning. The two might sound similar at first, but they are very different in reality.
What are we trying to automate?
Let's get one thing straight – the automation of machine learning processes has nothing to do with business process automation with machine learning. In the former, we're trying to automate machine learning itself, that is, the process of selecting the best model and the best hyperparameters. The latter refers to automating a business process with the help of machine learning; for example, building a decision system that decides when to buy or sell a stock based on historical data.
It's crucial to remember this distinction. The primary focus of this book is to demonstrate how automation libraries can be used to automate the process of machine learning. By doing so, you will follow the exact same approach, regardless of the dataset, and always end up with the best possible model.
Choosing an appropriate machine learning algorithm isn't an easy task. Just take a look at the following diagram:
As you can see, multiple decisions are required to select an appropriate algorithm. In addition, every algorithm has its own set of hyperparameters (parameters specified by the engineer rather than learned from the data). To make things even worse, some of these hyperparameters are continuous in nature, so when you add it all up, there are hundreds of thousands or even millions of hyperparameter combinations that you, as an engineer, should test.
Every hyperparameter combination requires training and evaluating a completely new model. Concepts such as grid search can help you avoid writing tens of nested loops, but they are far from an optimal solution.
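To make the combinatorial problem concrete, here is a minimal sketch of grid search using scikit-learn's GridSearchCV; the dataset and the parameter grid are arbitrary choices for illustration, not recommendations:

# A minimal grid search sketch, assuming scikit-learn is installed.
# The parameter grid below is illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_estimators": [100, 250, 500],
    "max_depth": [3, 5, None],
    "min_samples_split": [2, 5, 10],
}

# 3 * 3 * 3 = 27 combinations, each trained with 5-fold cross-validation,
# which means 135 model fits for this tiny, coarse grid alone.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid, cv=5, n_jobs=-1
)
search.fit(X, y)

print(search.best_params_)
print(search.best_score_)

Even this small grid requires 135 training runs, and it doesn't touch continuous hyperparameters at all.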
Modern machine learning engineers don't spend their time and energy on model training and optimization; instead, they focus on improving data quality and availability. Hyperparameter tweaking can squeeze out an additional 2% of accuracy, but it is data quality that can make or break your project.
We'll dive a bit deeper into hyperparameters next and demonstrate why searching for the optimal ones manually isn't that good an idea.
The problem of too many parameters
Let's take a look at some of the hyperparameters available for one of the most popular machine learning algorithms – XGBoost. The following list shows the general ones:
booster
verbosity
validate_parameters
nthread
disable_default_eval_metric
num_pbuffer
num_feature
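These general parameters are typically passed to the library as a plain dictionary. Here's a minimal sketch, assuming the xgboost Python package; only a few of the parameters above are set, and the values are arbitrary illustrations:

# A minimal sketch, assuming the xgboost Python package is installed.
# Only a few general parameters are set; the values are illustrative.
import xgboost as xgb
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "booster": "gbtree",             # which booster to use
    "verbosity": 1,                  # amount of logging
    "nthread": 4,                    # number of CPU threads
    "objective": "binary:logistic",  # task-specific objective
}

model = xgb.train(params, dtrain, num_boost_round=100)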
That's not much, and some of these hyperparameters are set automatically by the algorithm. The problem lies in the further selection. For example, if you choose gbtree as the value for the booster parameter, you can immediately tweak the values for the following:
eta
gamma
max_depth
min_child_weight
max_delta_step
subsample
sampling_method
colsample_bytree
colsample_bylevel
colsample_bynode
lambda
alpha
tree_method
sketch_eps
scale_pos_weight
updater
refresh_leaf
process_type
grow_policy
max_leaves
max_bin
predictor
num_parallel_tree
monotone_constraints
interaction_constraints
And that's a lot! As mentioned before, some hyperparameters take continuous values, which tremendously increases the total number of combinations. Here's the final icing on the cake – these are the hyperparameters for only a single model. Different models have different hyperparameters, which makes the tuning process that much more time-consuming.
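To see how quickly the combinations pile up, here is a small back-of-the-envelope sketch. It discretizes just a few of the tree-booster hyperparameters above into a handful of arbitrary candidate values and counts the resulting grid:

# Illustrative only: the candidate values are arbitrary discretizations,
# not recommended search ranges.
coarse_grid = {
    "eta": [0.01, 0.05, 0.1, 0.3],
    "gamma": [0, 0.1, 0.5, 1.0],
    "max_depth": [3, 5, 7, 10],
    "min_child_weight": [1, 3, 5],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
    "lambda": [0.1, 1.0, 10.0],
    "alpha": [0, 0.1, 1.0],
}

# Multiply the number of candidate values per hyperparameter.
n_combinations = 1
for values in coarse_grid.values():
    n_combinations *= len(values)

print(n_combinations)  # 4 * 4 * 4 * 3 * 3 * 3 * 3 * 3 = 15552

Eight hyperparameters with only three or four candidate values each already yield 15,552 candidate models, each requiring its own training and evaluation run, and this ignores the remaining parameters and the fact that many of them are continuous.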
Put simply, model selection and hyperparameter tuning isn't something you should do manually. There are more important tasks to spend your energy on. Even if there were nothing else to do, I'd rather go for lunch than tune models by hand any day of the week.
AutoML enables us to hand that work over to the machine, so we'll explore it briefly in the next section.
What is AutoML?
AutoML stands for Automated Machine Learning, and its primary goal is to reduce or completely eliminate the role of data scientists in building machine learning models. That sentence might sound harsh at first, and I know what you're thinking. But no – AutoML can't replace data scientists and other data professionals.
In the best-case scenario, AutoML technologies enable other software engineers to utilize the power of machine learning in their applications without needing a solid background in ML. This best-case scenario is only possible if the data is adequately gathered and prepared – a task that isn't the specialty of a backend developer.
To make things even harder for the non-data scientist, the machine learning process often requires extensive feature engineering. This step can be skipped, but doing so will more often than not result in poor models.
In conclusion, AutoML won't replace data scientists – quite the contrary, it's here to make the life of data scientists easier. The only parts AutoML automates to the full extent are model selection and hyperparameter tuning.
Some AutoML services advertise themselves as fully automating even the data preparation and feature engineering jobs, but in practice that usually means blindly combining various features into something that isn't interpretable most of the time. A machine doesn't know the true relationships between variables; discovering them is the data scientist's job.
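To make that concrete, here is a minimal sketch of what handing model selection and tuning over to an AutoML tool looks like in code. It assumes the TPOT library as one example of such a tool, and the settings are illustrative rather than tuned recommendations:

# A minimal AutoML sketch, assuming the TPOT library is installed.
# The search settings are illustrative, not tuned recommendations.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# TPOT searches over algorithms and their hyperparameters automatically -
# the part of the process that AutoML fully takes over.
automl = TPOTClassifier(generations=5, population_size=20,
                        cv=5, random_state=42, verbosity=2)
automl.fit(X_train, y_train)

print(automl.score(X_test, y_test))
automl.export("best_pipeline.py")  # writes the winning pipeline as Python code

Notice what isn't in the sketch: the data still has to be gathered, cleaned, and engineered before the automated search can do anything useful.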