Optimization techniques
Once we have trained a simple ensemble model that performs noticeably better than the baseline model and reaches the performance target estimated during data preparation, we can move on to optimization. This is a point we really want to emphasize: it is strongly discouraged to begin model optimization and stacking when a simple ensemble technique fails to deliver useful results. In that case, it is much better to take a step back and dive deeper into data analysis and feature engineering.
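As a rough illustration, the following sketch shows this sanity check, assuming a scikit-learn workflow; the synthetic dataset is a stand-in for your own prepared features and labels.

```python
# Minimal sketch: compare a simple ensemble against a naive baseline
# before investing any effort in optimization.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data; substitute the dataset prepared earlier.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

# Baseline: always predicts the most frequent class.
baseline = DummyClassifier(strategy="most_frequent")
baseline_score = cross_val_score(baseline, X, y, cv=5).mean()

# Simple ensemble: an untuned random forest with default settings.
ensemble = RandomForestClassifier(n_estimators=100, random_state=42)
ensemble_score = cross_val_score(ensemble, X, y, cv=5).mean()

print(f"baseline: {baseline_score:.3f}, ensemble: {ensemble_score:.3f}")
# Proceed to optimization only if the ensemble clearly beats the
# baseline and meets the target estimated during data preparation.
```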
Common ML optimization techniques, such as hyperparameter optimization, model stacking, and even automated machine learning, help you squeeze out the last 10% of performance, while the first 90% is achieved by a single ensemble model. If you decide to use any of these optimization techniques, it is advisable to run them fully automated and in parallel on a distributed cluster.
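A minimal sketch of parallel hyperparameter optimization, assuming scikit-learn: RandomizedSearchCV evaluates sampled configurations in parallel via n_jobs, and the same search can be dispatched to a cluster through a distributed joblib backend (for example, Dask or Ray) if one is available.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

# Placeholder data; substitute your own features and labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

# Search space for the ensemble's key hyperparameters.
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 8),
    "learning_rate": [0.01, 0.05, 0.1, 0.2],
}

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_distributions=param_distributions,
    n_iter=20,    # number of sampled configurations
    cv=5,
    n_jobs=-1,    # evaluate candidates in parallel on all cores
    random_state=42,
)
search.fit(X, y)
print(search.best_params_, f"{search.best_score_:.3f}")
```

Random search is used here only as a simple, easily parallelized example; the same pattern applies to grid search or Bayesian optimization libraries.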
After seeing too...