Understanding ML results
So far, our app may be useful, but just showing a result is often not enough for a data app; we should also explain the results. To do this, we can add a section to the output of the app we have already built that helps users understand the model better.
To start, random forest models already have a built-in measure of feature importance, derived from the set of individual decision trees that make up the random forest. We can edit our penguins_ml.py file to graph this importance, and then display that image from within our Streamlit app. We could also build the graph directly in our Streamlit app, but it is more efficient to create it once in penguins_ml.py rather than every time our Streamlit app reruns (which is every time a user changes an input!). The following code edits our penguins_ml.py file, adds the feature importance graph, and saves it to our folder. We also call the tight_layout() function, which adjusts the figure's padding so that the axis labels and title are not cut off in the saved image.
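A minimal sketch of what this addition to penguins_ml.py might look like is shown below. It assumes the usual Palmer Penguins columns in a local penguins.csv file; the variable names (penguin_df, features, rfc) and the output file name feature_importance.png are illustrative and may differ from your own file.

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier

# Assumed setup: load the penguins data and train the random forest, roughly
# as done earlier in penguins_ml.py. File and variable names are illustrative.
penguin_df = pd.read_csv('penguins.csv').dropna()
features = pd.get_dummies(
    penguin_df[['island', 'bill_length_mm', 'bill_depth_mm',
                'flipper_length_mm', 'body_mass_g', 'sex']]
)
rfc = RandomForestClassifier(random_state=15)
rfc.fit(features.values, penguin_df['species'])

# Plot the model's built-in feature importances and save the figure to disk,
# so the Streamlit app can load the image instead of rebuilding the plot on
# every rerun. tight_layout() keeps the long feature labels inside the figure.
fig, ax = plt.subplots()
sns.barplot(x=rfc.feature_importances_, y=features.columns, ax=ax)
ax.set_title('Which features are the most important for species prediction?')
ax.set_xlabel('Importance')
ax.set_ylabel('Feature')
plt.tight_layout()
fig.savefig('feature_importance.png')
```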
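On the Streamlit side, the app can then load the saved image with st.image() rather than recomputing the chart. A brief hedged example follows; the file name and caption text are assumptions that should match whatever penguins_ml.py actually saved.

```python
import streamlit as st

# Display the pre-generated importance chart saved by penguins_ml.py,
# avoiding the cost of rebuilding the figure on every app rerun.
st.write('The features used by the model, ranked by relative importance, '
         'are shown below.')
st.image('feature_importance.png')
```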