Data Science Projects with Python

You're reading from Data Science Projects with Python: A case study approach to successful data science projects using Python, pandas, and scikit-learn

Product type: Paperback
Published: April 2019
Publisher: Packt
ISBN-13: 9781838551025
Length: 374 pages
Edition: 1st Edition
Author: Stephen Klosterman
Table of Contents (9 chapters)

Preface
1. Data Exploration and Cleaning (FREE CHAPTER)
2. Introduction to Scikit-Learn and Model Evaluation
3. Details of Logistic Regression and Feature Exploration
4. The Bias-Variance Trade-off
5. Decision Trees and Random Forests
6. Imputation of Missing Data, Financial Analysis, and Delivery to Client
Appendix

Chapter 6: Imputation of Missing Data, Financial Analysis, and Delivery to Client


Activity 6: Deriving Financial Insights

  1. Using the testing set, calculate the cost of all defaults if there were no counseling program.

    Use this code for the calculation:

    cost_of_defaults = sum(y_test_all) * savings_per_default
    cost_of_defaults

    The output should be:

    Figure 6.59: Cost of all defaults assuming no counseling
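As a self-contained illustration of this calculation, here is a sketch with made-up numbers (both `y_test_all` and `savings_per_default` below are hypothetical stand-ins for the values computed earlier in the chapter):

```python
import numpy as np

# Hypothetical stand-ins: in the book, y_test_all holds the true default
# labels for the test set, and savings_per_default comes from the client's
# financial assumptions (the values below are invented for illustration).
y_test_all = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = account defaulted
savings_per_default = 13000  # assumed NT$ lost per default with no program

# With no counseling program, every default costs savings_per_default
cost_of_defaults = sum(y_test_all) * savings_per_default
print(cost_of_defaults)  # 4 defaults * NT$13,000 = 52000
```

The real figure in the book is much larger, since the test set contains thousands of accounts.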

  2. Calculate the percentage by which the counseling program could decrease the cost of defaults.

    The potential decrease in cost of default is the greatest possible net savings of the counseling program, divided by the cost of all defaults in the absence of a program:

    net_savings[max_savings_ix]/cost_of_defaults

    The output should be:

    Figure 6.60: Fractional decrease in cost of defaults that could result from a counseling program

    Results indicate that a counseling program guided by predictive modeling could decrease the cost of defaults by 23%.

  3. Calculate the net savings per account at the optimal threshold.

    Use this code for the calculation:

    net_savings[max_savings_ix]/len(y_test_all)

    The output should be:

    Figure 6.61: Net savings per account possible with the counseling program

    Results like these help the client scale the potential savings of the counseling program to however many accounts they serve.
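As a hypothetical sketch of that scaling step (all numbers below are made up, not results from the book), the per-account savings simply multiplies out to the size of the client's portfolio:

```python
# Hypothetical numbers: scale the net savings per account, estimated on
# the test set, up to the full number of accounts the client serves.
net_savings_per_account = 1350  # assumed NT$ per account at the optimal threshold
n_accounts_served = 50000       # assumed size of the client's portfolio

projected_total_savings = net_savings_per_account * n_accounts_served
print(projected_total_savings)  # NT$67,500,000
```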

  4. Plot the net savings per account against the cost of counseling per account for each threshold.

    Create the plot with this code:

    plt.plot(cost_of_all_counselings/len(y_test_all), net_savings/len(y_test_all))
    plt.xlabel('Upfront investment: cost of counselings per account (NT$)')
    plt.ylabel('Net savings per account (NT$)')

    The resulting plot should appear like this:

    Figure 6.62: The initial cost of the counseling program needed to achieve a given amount of savings

    This indicates how much money the client needs to budget for the counseling program in a given month to achieve a given amount of savings. It looks like the greatest benefit comes from budgeting up to about NT$2,000 per account; beyond that, net savings are relatively flat and then decline. The client may not actually be able to budget this much to the program. However, this graphic gives them evidence to argue for a larger budget if they need to.

    This result corresponds to our graphic from the previous exercise. Although we've shown the optimal threshold is 0.2, it may be fine for the client to use a higher threshold up to about 0.25, thus making fewer positive predictions, offering counseling to fewer account holders, and having a smaller upfront program cost. Figure 6.62 shows how this plays out in terms of cost and net savings per account.
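To make this trade-off concrete, a small sketch with synthetic arrays (the real `thresholds` and `net_savings` come from the previous exercise; the values below are invented) can list every threshold whose net savings falls within, say, 5% of the maximum:

```python
import numpy as np

# Synthetic stand-ins for the arrays computed in the previous exercise
thresholds = np.linspace(0, 0.5, 11)  # 0.00, 0.05, ..., 0.50
net_savings = np.array([-500, 200, 800, 1300, 1500,
                        1450, 1400, 900, 400, 100, -200])

# The single best threshold...
max_savings_ix = np.argmax(net_savings)
print(thresholds[max_savings_ix])  # 0.2

# ...and all thresholds within 5% of the maximum net savings
near_optimal = thresholds[net_savings >= 0.95 * net_savings[max_savings_ix]]
print(near_optimal)  # thresholds 0.2 and 0.25
```

With these made-up numbers, 0.25 delivers nearly the same savings as 0.2 at a smaller upfront cost, mirroring the pattern in Figure 6.62.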

  5. Plot the fraction of accounts predicted as positive (this is called the "flag rate") at each threshold.

    Use this code to plot the flag rate against the threshold:

    plt.plot(thresholds, n_pos_pred/len(y_test_all))
    plt.ylabel('Flag rate')
    plt.xlabel('Threshold')

    The plot should appear as follows:

    Figure 6.63: Flag rate against threshold for the credit counseling program

    This plot shows the fraction of accounts that will be predicted to default, and therefore recommended for outreach, at each threshold. It appears that at the optimal threshold of 0.2, only about 30% of accounts will be flagged for counseling. This shows how using a model to prioritize accounts for counseling can focus resources on the right accounts and reduce waste. Higher thresholds, which may result in nearly optimal savings up to a threshold of about 0.25 as shown in Figure 6.36 (Exercise 25, Characterizing Costs and Savings), result in lower flag rates.
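The flag rate at any threshold is just the fraction of predicted probabilities at or above it. A minimal sketch, using synthetic probabilities in place of the model's actual output:

```python
import numpy as np

# Synthetic predicted default probabilities standing in for the model's
# output: a right-skewed beta distribution, so most accounts are low risk.
rng = np.random.default_rng(1)
y_pred_proba = rng.beta(1, 5, size=1000)

threshold = 0.2
n_pos_pred = (y_pred_proba >= threshold).sum()  # accounts flagged for counseling
flag_rate = n_pos_pred / len(y_pred_proba)
print(flag_rate)  # roughly 0.33 for this synthetic distribution
```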

  6. Plot a precision-recall curve for the testing data using the following code:

    plt.plot(n_true_pos/sum(y_test_all), np.divide(n_true_pos, n_pos_pred))
    plt.xlabel('Recall')
    plt.ylabel('Precision')

    The plot should look like this:

    Figure 6.64: Precision-recall curve

    Figure 6.64 shows that in order to start getting a true positive rate (that is, recall) much above 0, we need to accept a precision of about 0.75 or lower. So, it appears there is room for improvement in our model. While this would not necessarily be communicated to the client, it shows that by using more advanced modeling techniques, or a richer set of features, model performance could be improved.

    Precision and recall have a direct link to the cost and savings of the program: the more precise our predictions are, the less money we are wasting on counseling due to incorrect model predictions. And, the higher the recall, the more savings we can create by successfully identifying accounts that would default. Compare the code in this step to the code used to calculate cost and savings in the previous exercise to see this. This links the financial analysis to machine learning metrics we have examined earlier in the case study.
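A simplified sketch of this link (the counts and NT$ figures below are hypothetical, and the savings formula omits the chapter's counseling-effectiveness assumptions): precision and recall are ratios of the same counts that determine net savings.

```python
# Hypothetical counts at one threshold: the same quantities drive both
# the machine learning metrics and the financial result.
n_pos_pred = 400   # accounts flagged and offered counseling
n_true_pos = 300   # flagged accounts that would truly default
n_all_pos = 900    # all true defaults in the test set

savings_per_default = 13000  # assumed NT$ saved per prevented default
cost_per_counseling = 7500   # assumed NT$ cost to counsel one account

precision = n_true_pos / n_pos_pred  # fraction of counseling spend on target
recall = n_true_pos / n_all_pos      # fraction of potential savings captured

# Simplified net savings: value of true positives minus cost of all outreach
net_savings = (n_true_pos * savings_per_default
               - n_pos_pred * cost_per_counseling)
print(precision, recall, net_savings)  # 0.75, ~0.33, NT$900,000
```

Raising precision shrinks the wasted-counseling term; raising recall grows the savings term.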

    To see the connection of precision and recall with the threshold used to define positive and negative predictions, it can be instructive to plot them separately.

  7. Plot precision and recall separately on the y-axis against threshold on the x-axis.

    Use this code to produce the plot:

    plt.plot(thresholds, np.divide(n_true_pos, n_pos_pred), label='Precision')
    plt.plot(thresholds, n_true_pos/sum(y_test_all), label='Recall')
    plt.xlabel('Threshold')
    plt.legend()

    The plot should appear as follows:

    Figure 6.65: Precision and recall plotted separately against threshold

    This plot sheds some light on why the optimal threshold turned out to be 0.2. While the optimal threshold also depends on the financial analysis of costs and savings, we can see here that the steepest part of the initial increase in precision happens up to a threshold of about 0.2. Precision represents the correctness of positive predictions, and is therefore a measure of how cost-effective model-guided counseling can be.
