Chapter 6: Imputation of Missing Data, Financial Analysis, and Delivery to Client
Activity 6: Deriving Financial Insights
Using the testing set, calculate the cost of all defaults if there were no counseling program.
Use this code for the calculation:
cost_of_defaults = sum(y_test_all) * savings_per_default
cost_of_defaults
The output should be the total cost of all defaults on the testing set, in NT$.
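If you want to check the mechanics of this calculation in isolation, here is a minimal, self-contained sketch using made-up values (y_test_all and savings_per_default here are stand-ins for illustration, not the case study data). Because the labels are binary, sum(y_test_all) counts the number of defaulters:

import numpy as np

# Hypothetical stand-in data, for illustration only
y_test_all = np.array([0, 1, 0, 0, 1, 1, 0, 0])  # 1 = account defaulted
savings_per_default = 10000  # NT$ per default, an assumed value

# Each default costs savings_per_default if nothing is done about it
cost_of_defaults = sum(y_test_all) * savings_per_default
print(cost_of_defaults)  # 3 defaults * NT$10,000 = NT$30,000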
Calculate by what percent the cost of defaults can be decreased by the counseling program.
The potential decrease in the cost of defaults is the greatest possible net savings of the counseling program, divided by the cost of all defaults in the absence of a program:
net_savings[max_savings_ix]/cost_of_defaults
The output should be a fraction of approximately 0.23.
Results indicate that we can decrease the cost of defaults by about 23% using a counseling program guided by predictive modeling.
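For reporting, you may want to format this fraction as a percentage. A small sketch, assuming net_savings, max_savings_ix, and cost_of_defaults are already defined as above:

# Express the potential decrease in the cost of defaults as a percentage
pct_decrease = 100 * net_savings[max_savings_ix] / cost_of_defaults
print('Potential decrease in cost of defaults: {:.0f}%'.format(pct_decrease))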
Calculate the net savings per account at the optimal threshold.
Use this code for the calculation:
net_savings[max_savings_ix]/len(y_test_all)
The output should be the net savings per account, in NT$, at the optimal threshold.
Results like these help the client scale the potential savings of the counseling program to however many accounts they serve.
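For example, if the client's full portfolio is larger than the testing set, the per-account figure can be scaled up directly. A hedged sketch, where n_accounts_served is an assumed portfolio size, not a number from the case study:

# Extrapolate per-account savings to an assumed portfolio size
savings_per_account = net_savings[max_savings_ix] / len(y_test_all)
n_accounts_served = 100000  # hypothetical number of accounts served
print(savings_per_account * n_accounts_served)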
Plot the net savings per account against the cost of counseling per account for each threshold.
Create the plot with this code:
plt.plot(cost_of_all_counselings/len(y_test_all), net_savings/len(y_test_all))
plt.xlabel('Upfront investment: cost of counselings per account (NT$)')
plt.ylabel('Net savings per account (NT$)')
The resulting plot should appear as in Figure 6.62.
This indicates how much money the client needs to budget for the counseling program in a given month to achieve a given amount of savings. It looks like the greatest benefit can be created by budgeting up to about NT$2,000 per account; beyond this, net savings are relatively flat and then decline. The client may not actually be able to budget this much for the program. However, this graphic gives them evidence to argue for a larger budget if they need to.
This result corresponds to our graphic from the previous exercise. Although we've shown that the optimal threshold is 0.2, it may be fine for the client to use a higher threshold of up to about 0.25, thus making fewer positive predictions, offering counseling to fewer account holders, and incurring a smaller upfront program cost. Figure 6.62 shows how this plays out in terms of cost and net savings per account.
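To make this point visually, one option is to mark the optimal operating point on the same axes. This is an optional sketch, not part of the original solution; it assumes cost_of_all_counselings, net_savings, and max_savings_ix from the previous exercise are still in scope:

import matplotlib.pyplot as plt

# Re-draw the curve and mark the operating point at the optimal threshold
plt.plot(cost_of_all_counselings/len(y_test_all), net_savings/len(y_test_all))
plt.plot(cost_of_all_counselings[max_savings_ix]/len(y_test_all),
         net_savings[max_savings_ix]/len(y_test_all),
         marker='x', markersize=12, color='red')
plt.xlabel('Upfront investment: cost of counselings per account (NT$)')
plt.ylabel('Net savings per account (NT$)')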
Plot the fraction of accounts predicted as positive (this is called the "flag rate") at each threshold.
Use this code to plot the flag rate against the threshold:
plt.plot(thresholds, n_pos_pred/len(y_test_all))
plt.ylabel('Flag rate')
plt.xlabel('Threshold')
The resulting plot shows the flag rate at each threshold, which decreases as the threshold increases.
This plot shows the fraction of accounts that will be predicted to default, and therefore recommended for outreach, at each threshold. It appears that at the optimal threshold of 0.2, only about 30% of accounts will be flagged for counseling. This shows how using a model to prioritize accounts for counseling can focus resources on the right accounts and reduce waste. Higher thresholds, which may result in nearly optimal savings up to a threshold of about 0.25 as shown in Figure 6.36 (Exercise 25, Characterizing Costs and Savings), result in lower flag rates.
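You can confirm the roughly 30% figure directly, rather than reading it off the plot. A small sketch, assuming thresholds, n_pos_pred, and max_savings_ix are in scope from the previous exercise:

# Flag rate at the optimal threshold
print('Threshold: {}'.format(thresholds[max_savings_ix]))
print('Flag rate: {:.2f}'.format(n_pos_pred[max_savings_ix]/len(y_test_all)))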
Plot a precision-recall curve for the testing data using the following code:
plt.plot(n_true_pos/sum(y_test_all), np.divide(n_true_pos, n_pos_pred))
plt.xlabel('Recall')
plt.ylabel('Precision')
The plot should look like Figure 6.64.
Figure 6.64 shows that in order to get a recall (that is, a true positive rate) much above 0, we need to accept a precision of about 0.75 or lower. So, it appears there is room for improvement in our model. While this would not necessarily be communicated to the client, it shows that model performance could be improved by using more advanced modeling techniques, or a richer set of features.
Precision and recall have a direct link to the costs and savings of the program: the more precise our predictions are, the less money we waste on counseling due to incorrect model predictions, and the higher the recall, the more savings we create by successfully identifying accounts that would default. Compare the code in this step with the code used to calculate costs and savings in the previous exercise to see this. This links the financial analysis to the machine learning metrics we examined earlier in the case study.
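One way to see this link algebraically: if each true positive saves savings_per_default and each positive prediction incurs a per-account counseling cost, then net savings can be rewritten in terms of precision. The sketch below uses an assumed variable name, cost_per_counseling, for that per-account cost; the exact cost structure in the case study may differ (for example, if counseling is only partially effective), so treat this as an illustration of the relationship rather than the exact formula:

import numpy as np

# Under the simplifying assumptions above:
#   net_savings = n_true_pos * savings_per_default
#                 - n_pos_pred * cost_per_counseling
# and since precision = n_true_pos / n_pos_pred, this factors as:
precision = np.divide(n_true_pos, n_pos_pred)
net_savings_via_precision = n_pos_pred * (
    precision * savings_per_default - cost_per_counseling)
# In expectation, a flagged account is only worth flagging when
# precision * savings_per_default > cost_per_counseling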
To see the connection of precision and recall with the threshold used to define positive and negative predictions, it can be instructive to plot them separately.
Plot precision and recall separately on the y-axis against threshold on the x-axis.
Use this code to produce the plot:
plt.plot(thresholds, np.divide(n_true_pos, n_pos_pred), label='Precision')
plt.plot(thresholds, n_true_pos/sum(y_test_all), label='Recall')
plt.xlabel('Threshold')
plt.legend()
The plot should show precision and recall as two separate curves against the threshold.
This plot sheds some light on why the optimal threshold turned out to be 0.2. While the optimal threshold also depends on the financial analysis of costs and savings, we can see here that the steepest part of the initial increase in precision happens up to a threshold of about 0.2. Since precision represents the correctness of positive predictions, it is a measure of how cost-effective model-guided counseling can be.
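To tie this back to the optimal threshold visually, you could add a vertical line at 0.2 to the plot from the previous step. A small sketch, assuming that plot is still the active figure:

# Mark the optimal threshold on the precision/recall-versus-threshold plot
plt.axvline(0.2, linestyle='--', color='gray', label='Optimal threshold')
plt.legend()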