Other Interpretable AI Tools
There are many other methods and tools to interpret transformer models. We will briefly examine two efficient tools: LIT and OpenAI’s GPT-4 explainer. Let's now begin with the intuitive LIT tool.
LIT
LIT's visual interface helps you find examples that the model processes incorrectly, analyze similar examples, see how the model behaves when you change a context, and explore other language issues related to transformer models. LIT does not display the activities of the attention heads as BertViz does. However, it is worth analyzing why things went wrong and trying to find solutions.

You can choose a Uniform Manifold Approximation and Projection (UMAP) visualization or a PCA projector representation. PCA makes linear projections along specific directions and magnitudes. UMAP breaks its projections down into mini-clusters. Both approaches make sense, depending on how far you want to go when analyzing the output.
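The difference between the two projections can be sketched outside of LIT. Below is a minimal illustration using scikit-learn's PCA on toy data; the 768-dimensional "embeddings" are hypothetical stand-ins for the model embeddings LIT would project, and the commented-out UMAP call assumes the third-party umap-learn package.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two hypothetical clusters of 768-dimensional "embeddings",
# standing in for the model outputs LIT would visualize.
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(50, 768))
cluster_b = rng.normal(loc=1.0, scale=0.1, size=(50, 768))
embeddings = np.vstack([cluster_a, cluster_b])

# PCA: a linear projection along the directions of greatest variance.
pca = PCA(n_components=2)
points_2d = pca.fit_transform(embeddings)
print(points_2d.shape)  # (100, 2)

# UMAP would be called the same way (assuming umap-learn is installed):
#   import umap
#   points_2d = umap.UMAP(n_components=2).fit_transform(embeddings)
# UMAP's non-linear projection tends to pull each group into a tight
# mini-cluster, while PCA keeps one global linear view of the data.
```

Either 2D result can then be scatter-plotted to inspect how examples group together, which is essentially what LIT's embedding projector does interactively.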