wzuidema committed
Commit: b0bf43a (parent: d228cdf)

Update app.py

Files changed (1): app.py (+1 −1)
app.py CHANGED

@@ -292,7 +292,7 @@ they provide a very limited form of "explanation" -- and often disagree -- but s
 Two key attribution methods for Transformers are "Attention Rollout" (Abnar & Zuidema, 2020) and (layer) Integrated Gradient. Here we show:

 * Gradient-weighted attention rollout, as defined by [Hila Chefer](https://github.com/hila-chefer)
-[(Transformer-MM_explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/), without rollout recursion upto selected layer
+[(Transformer-MM_explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/), with rollout recursion upto selected layer
 * Layer IG, as implemented in [Captum](https://captum.ai/)(LayerIntegratedGradients), based on gradient w.r.t. selected layer.
 """,
 examples=[
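For context on what the corrected description refers to: attention rollout (Abnar & Zuidema, 2020) propagates attention through the layers by matrix-multiplying per-layer attention maps, after adding the residual (identity) connection and row-normalizing. A minimal NumPy sketch of the idea follows; it is not the code from app.py, and the `start_layer` knob is a hypothetical stand-in for the app's "selected layer" option up to which the rollout recurses.

```python
import numpy as np

def attention_rollout(attentions, start_layer=0):
    """Attention rollout over a list of per-layer attention maps.

    attentions: list of (seq_len, seq_len) arrays, one per layer,
                already averaged over attention heads.
    start_layer: only layers from this index onward are rolled up
                 (hypothetical analogue of the app's selected layer).
    """
    seq_len = attentions[0].shape[0]
    rollout = np.eye(seq_len)
    for attn in attentions[start_layer:]:
        # Account for the residual connection by adding the identity,
        # then renormalize each row to keep a stochastic matrix.
        attn_res = attn + np.eye(seq_len)
        attn_res = attn_res / attn_res.sum(axis=-1, keepdims=True)
        # Compose this layer's attention with the rollout so far.
        rollout = attn_res @ rollout
    return rollout
```

Because each renormalized map is row-stochastic, the rolled-out matrix stays row-stochastic, so each row can be read as a distribution over input tokens.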