To recap, we built the attrition model on a class-imbalanced dataset. Applying techniques to resolve class imbalance prior to model building is another key aspect of obtaining reliable model performance measurements. We implemented and evaluated the attrition model using bagging, randomization, boosting, and stacking, and achieved 91% accuracy using only the features readily available in the dataset. Feature engineering plays a crucial role in ML models and is another path worth exploring to improve model performance further.
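As a rough illustration of how these ideas fit together (not the chapter's actual code), the following minimal Python sketch combines class weighting to counter imbalance with a stacking ensemble whose base learners cover bagging/randomization (a random forest) and boosting (gradient boosting). The synthetic data, estimator choices, and hyperparameters are assumptions made purely for demonstration.

```python
# Illustrative sketch: class-imbalance-aware stacking ensemble (scikit-learn).
# The dataset is synthetic and the hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    RandomForestClassifier,
    GradientBoostingClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic class-imbalanced data standing in for an attrition dataset
# (roughly 90% "stay" vs. 10% "leave").
X, y = make_classification(
    n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)

# Base learners: a random forest (bagging + feature randomization) with
# balanced class weights, and a gradient-boosting model (boosting).
base_learners = [
    ("rf", RandomForestClassifier(
        n_estimators=200, class_weight="balanced", random_state=42)),
    ("gb", GradientBoostingClassifier(random_state=42)),
]

# Stacking: a logistic-regression meta-learner combines the base predictions.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_train, y_train)

# Report per-class precision/recall, which matter more than raw accuracy
# when the classes are imbalanced.
print(classification_report(y_test, stack.predict(X_test)))
```

Because the positive class is rare, the per-class recall in the report is usually a more honest measure of progress than overall accuracy.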
In the next chapter, we will explore the secret recipe behind recommending products or content by building a personalized recommendation engine. I am all set to implement a project that recommends jokes. Turn to the next chapter to continue the learning journey.