Using convolutional neural network ensembles to improve accuracy
In machine learning, one of the most robust classifiers is, in fact, a meta-classifier known as an ensemble. An ensemble is composed of so-called weak classifiers: predictive models that perform only slightly better than random guessing. When combined, however, they produce a remarkably robust algorithm that is particularly resistant to high variance (overfitting). Some of the most famous examples of ensembles we may encounter are Random Forest and Gradient Boosting Machines.
The good news is that we can leverage the same principle when it comes to neural networks, thus creating a whole that's more than the sum of its parts. Do you want to learn how? Keep reading!
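To make the idea concrete before we build actual networks, here is a minimal sketch of the core ensembling trick: averaging the per-class probabilities produced by several models and taking the argmax. The probability values below are invented for illustration; in the recipe itself they would come from trained CNNs:

```python
import numpy as np

# Hypothetical softmax outputs from three weak classifiers,
# for four samples over three classes (illustrative values only).
preds_a = np.array([[0.5, 0.3, 0.2],
                    [0.2, 0.5, 0.3],
                    [0.4, 0.4, 0.2],
                    [0.1, 0.2, 0.7]])
preds_b = np.array([[0.6, 0.2, 0.2],
                    [0.3, 0.4, 0.3],
                    [0.2, 0.5, 0.3],
                    [0.2, 0.2, 0.6]])
preds_c = np.array([[0.4, 0.4, 0.2],
                    [0.1, 0.6, 0.3],
                    [0.3, 0.3, 0.4],
                    [0.3, 0.1, 0.6]])

# Ensemble by averaging the per-class probabilities across models,
# then pick the class with the highest averaged probability.
ensemble = np.mean([preds_a, preds_b, preds_c], axis=0)
labels = np.argmax(ensemble, axis=1)
print(labels)  # -> [0 1 1 2]
```

Notice how the ensemble settles sample 2, where the individual models disagree: averaging smooths out each model's idiosyncratic errors, which is exactly the variance-reducing effect we'll exploit with CNNs.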
Getting ready
This recipe depends on Pillow and tensorflow_docs, which can be easily installed like this:
$> pip install Pillow git+https://github.com/tensorflow/docs
We'll also be using the famous Caltech 101 dataset, available here: http://www...