Understanding ML results
So far, our app might be useful, but often just showing a result is not good enough for a data app. We should also explain why users got the result they did! To do this, we can add a section to the output of the app we have already made that helps users understand the model better.
To start, random forest models already have a built-in feature importance measure derived from the set of individual decision trees that make up the random forest. We can edit our penguins_ml.py file to graph this importance, and then display that image from within our Streamlit app. We could also graph this directly from within our Streamlit app, but it is more efficient to make this graph once in penguins_ml.py instead of remaking it every time our Streamlit app reruns (which is every time a user changes an input!). The following code edits our penguins_ml.py file and adds the feature importance graph, saving it to our folder. We also call the...
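A minimal sketch of the idea is shown below: train the random forest, plot scikit-learn's built-in feature_importances_ with seaborn, and save the figure to disk so the Streamlit app can simply load the image instead of recomputing it on every rerun. The CSV name, column names, and variable names here are assumptions for illustration, not necessarily the exact contents of penguins_ml.py.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier

# Load and clean the data (file and column names are assumed for this sketch)
penguins = pd.read_csv('penguins.csv').dropna()
features = pd.get_dummies(
    penguins[['island', 'bill_length_mm', 'bill_depth_mm',
              'flipper_length_mm', 'body_mass_g', 'sex']]
)
target = penguins['species']

# Train the random forest
rfc = RandomForestClassifier(random_state=15)
rfc.fit(features, target)

# Plot the impurity-based importances that the forest computes for each feature,
# then save the figure so the Streamlit app can load it as a static image
fig, ax = plt.subplots()
sns.barplot(x=rfc.feature_importances_, y=features.columns, ax=ax)
ax.set_title('Which features are the most important for species prediction?')
ax.set_xlabel('Importance')
ax.set_ylabel('Feature')
plt.tight_layout()
fig.savefig('feature_importance.png')
```

On the Streamlit side, the saved figure can then be shown with st.image('feature_importance.png'), so the plot only changes when the model is retrained rather than on every user interaction.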