Interpretability for sequence generation models
Code for our paper, Neural Network Attributions: A Causal Perspective (ICML 2019).
On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification
Attribution (or visual explanation) methods for understanding video classification networks. Demo code for the WACV 2021 paper: Towards Visually Explaining Video Understanding Networks with Perturbation.
Code for the paper: Towards Better Understanding Attribution Methods (CVPR 2022).
Metrics for evaluating interpretability methods.
Code for our neural network attribution work accepted at AISTATS 2022.
Source code for the journal paper: Spatio-Temporal Perturbations for Video Attribution (TCSVT 2021).