wzuidema committed
Commit: ce5a659
Parent(s): 647ac05
explanations added
app.py
CHANGED
@@ -288,11 +288,11 @@ But how does it arrive at its classification? A range of so-called "attribution
 
 (Note that in general, importance scores only provide a very limited form of "explanation" and that different attribution methods differ radically in how they assign importance).
 
-Two key methods for Transformers are "attention rollout" (Abnar & Zuidema, 2020) and (
+Two key methods for Transformers are "attention rollout" (Abnar & Zuidema, 2020) and (layer) Integrated Gradients. Here we show:
 
-* Gradient-weighted attention rollout, as defined by [Hila Chefer
-[Transformer-MM_explainability](https://github.com/hila-chefer/Transformer-MM-Explainability/)
-* Layer IG, as implemented in [Captum](https://captum.ai/)
+* Gradient-weighted attention rollout, as defined by [Hila Chefer](https://github.com/hila-chefer)
+[(Transformer-MM-Explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/)
+* Layer IG, as implemented in [Captum](https://captum.ai/) (LayerIntegratedGradients)
 """,
 examples=[
 [
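For context on the first method listed in the added description, here is a minimal sketch of gradient-weighted attention rollout, not the app's actual code: per layer, the attention map is weighted by its gradient with respect to the target class, negative contributions are clamped, heads are averaged, the residual connection is added, and the per-layer maps are multiplied together (Abnar & Zuidema's rollout with Chefer-style gradient weighting). The helper name `grad_weighted_rollout` and the random tensors are purely illustrative.

```python
import torch

def grad_weighted_rollout(attentions, gradients):
    """attentions, gradients: one (num_heads, seq_len, seq_len) tensor per layer;
    gradients are taken w.r.t. the target-class score."""
    seq_len = attentions[0].shape[-1]
    rollout = torch.eye(seq_len)
    for attn, grad in zip(attentions, gradients):
        # Weight attention by its gradient, keep positive relevance, average over heads.
        cam = (grad * attn).clamp(min=0).mean(dim=0)
        # Add the residual connection and renormalize each row.
        cam = cam + torch.eye(seq_len)
        cam = cam / cam.sum(dim=-1, keepdim=True)
        # Propagate relevance through this layer.
        rollout = cam @ rollout
    return rollout

# Toy usage with random tensors standing in for a real model's attentions/gradients.
layers, heads, seq = 4, 8, 10
attns = [torch.rand(heads, seq, seq).softmax(dim=-1) for _ in range(layers)]
grads = [torch.randn(heads, seq, seq) for _ in range(layers)]
print(grad_weighted_rollout(attns, grads)[0])  # relevance of each token for the first ([CLS]) position
```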
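And for the second method, a hedged sketch of how Captum's `LayerIntegratedGradients` can attribute a classifier's prediction to its input embedding layer. The checkpoint name, the positive-class target index (1), and the pad-token baseline are assumptions for illustration, not necessarily what this app uses.

```python
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint; swap in the model this app actually loads.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def forward_logits(input_ids, attention_mask):
    return model(input_ids, attention_mask=attention_mask).logits

enc = tokenizer("A delightful, warm-hearted film.", return_tensors="pt")
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

# Attribute the (assumed) positive-class logit to the input embedding layer.
lig = LayerIntegratedGradients(forward_logits, model.get_input_embeddings())
attributions = lig.attribute(
    enc["input_ids"],
    baselines=baseline,
    additional_forward_args=(enc["attention_mask"],),
    target=1,
)
# One importance score per token: sum over the embedding dimension.
scores = attributions.sum(dim=-1).squeeze(0)
for tok, s in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), scores.tolist()):
    print(f"{tok}\t{s:+.3f}")
```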