Chapter 12: Boosting Performance
More often than not, the leap from good to great doesn't involve drastic changes, but rather subtle tweaks and fine-tuning.
It is often said that 20% of the effort yields 80% of the results (this is known as the Pareto principle). But what about the gap between 80% and 100%? What do we need to do to exceed expectations, to improve our solutions, and to squeeze as much performance as possible out of our computer vision algorithms?
Well, as with all things deep learning, the answer is a mixture of art and science. The good news is that in this chapter, we'll focus on simple tools you can use to boost the performance of your neural networks!
In this chapter, we will cover the following recipes:
- Using convolutional neural network ensembles to improve accuracy
- Using test time augmentation to improve accuracy
- Using rank-N accuracy to evaluate performance
- Using label smoothing to increase performance
- Checkpointing...
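To give a flavor of one of these recipes, rank-N (also called top-N) accuracy counts a prediction as correct whenever the true label appears among the model's N most probable classes, rather than only when it is the single top guess. A minimal sketch in NumPy (the function name and toy data here are illustrative, not from a specific library):

```python
import numpy as np

def rank_n_accuracy(probs, labels, n=5):
    """Fraction of samples whose true label appears among the
    n highest-probability predicted classes."""
    # Indices of the n highest-scoring classes for each sample.
    top_n = np.argsort(probs, axis=1)[:, -n:]
    hits = [label in row for row, label in zip(top_n, labels)]
    return float(np.mean(hits))

# Toy example: 3 samples, 4 classes.
probs = np.array([[0.1, 0.6, 0.2, 0.1],
                  [0.3, 0.1, 0.4, 0.2],
                  [0.25, 0.25, 0.25, 0.25]])
labels = np.array([1, 0, 0])
print(rank_n_accuracy(probs, labels, n=2))
```

With n=1 this reduces to ordinary accuracy; larger n gives a more forgiving metric that is common on benchmarks with many visually similar classes (for example, top-5 accuracy on ImageNet).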