wzuidema committed
Commit ce5a659
1 Parent(s): 647ac05

explanations added

Files changed (1)
  1. app.py +4 -4
app.py CHANGED
@@ -288,11 +288,11 @@ But how does it arrive at its classification? A range of so-called "attribution
 
 (Note that in general, importance scores only provide a very limited form of "explanation" and that different attribution methods differ radically in how they assign importance).
 
-Two key methods for Transformers are "attention rollout" (Abnar & Zuidema, 2020) and (layered) Integrated Gradient. Here we show:
+Two key methods for Transformers are "attention rollout" (Abnar & Zuidema, 2020) and (layer) Integrated Gradient. Here we show:
 
-* Gradient-weighted attention rollout, as defined by [Hila Chefer's](https://github.com/hila-chefer)
-[Transformer-MM_explainability](https://github.com/hila-chefer/Transformer-MM-Explainability/))
-* Layer IG, as implemented in [Captum](https://captum.ai/)'s LayerIntegratedGradients
+* Gradient-weighted attention rollout, as defined by [Hila Chefer](https://github.com/hila-chefer)
+[(Transformer-MM_explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/)
+* Layer IG, as implemented in [Captum](https://captum.ai/)(LayerIntegratedGradients)
 """,
 examples=[
 [
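The two bullets added in this commit name concrete techniques. As a rough illustration only (not the code in app.py), gradient-weighted attention rollout combines each layer's attention map with its gradient and accumulates relevance across layers; a minimal sketch, assuming the per-layer attentions and their gradients have already been collected from a Transformer run with `output_attentions=True` followed by a backward pass:

```python
import torch

def gradient_weighted_rollout(attentions, gradients):
    """Accumulate relevance across layers, weighting each attention map by its gradient.

    attentions / gradients: lists of per-layer tensors of shape (batch, heads, tokens, tokens).
    Simplified sketch of the update rule used in Transformer-MM-Explainability.
    """
    num_tokens = attentions[0].shape[-1]
    relevance = torch.eye(num_tokens)
    for attn, grad in zip(attentions, gradients):
        # Gradient-weight the attention, keep positive contributions, average over heads.
        cam = (grad * attn).clamp(min=0).mean(dim=1)[0]
        # Add this layer's contribution on top of the residual (identity) path.
        relevance = relevance + torch.matmul(cam, relevance)
    # The row for the [CLS] token gives one importance score per input token.
    return relevance
```

Layer IG, the second bullet, attributes the predicted-class score to the embedding layer via Captum's LayerIntegratedGradients. Again a hedged sketch rather than the app's actual code; the DistilBERT checkpoint and the all-padding baseline are assumptions chosen for illustration:

```python
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def forward_func(input_ids, attention_mask):
    # Return the class logits; Captum selects the `target` column from them.
    return model(input_ids, attention_mask=attention_mask).logits

inputs = tokenizer("A delightful, well-acted film.", return_tensors="pt")
target = int(model(**inputs).logits.argmax(dim=-1))

# Attribute the predicted-class logit to the embedding layer,
# integrating from an all-padding baseline to the actual input.
lig = LayerIntegratedGradients(forward_func, model.distilbert.embeddings)
baselines = torch.full_like(inputs["input_ids"], tokenizer.pad_token_id)
attributions = lig.attribute(
    inputs["input_ids"],
    baselines=baselines,
    additional_forward_args=(inputs["attention_mask"],),
    target=target,
)

# Sum over the embedding dimension to get one importance score per token.
scores = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, scores.tolist())))
```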