wzuidema committed 2578029 (parent: 51dd120): Update notice.md
* Shown on the left are the results from gradient-weighted attention rollout, as defined by [Hila Chefer](https://github.com/hila-chefer) [(Transformer-MM-Explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/), with rollout recursion up to the selected layer, split out between the contribution towards a predicted positive sentiment and the contribution towards a predicted negative sentiment.
* Layer IG, as implemented in [Captum](https://captum.ai/) (LayerIntegratedGradients), is based on the gradient w.r.t. the selected layer. IG integrates gradients over a path between the observed word and a baseline (here we use two popular choices of baseline: the unknown-word token or the padding token).
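The rollout recursion mentioned above can be illustrated with a minimal NumPy sketch. This shows only the plain (ungradient-weighted) rollout step — averaging each layer's attention with the identity to account for residual connections, re-normalizing, and multiplying up to the selected layer; the gradient weighting and the Captum IG computation require a trained model and are not shown. The function name and the toy uniform-attention input are illustrative, not from the original.

```python
import numpy as np

def attention_rollout(attentions):
    """Plain attention rollout: at each layer, average the attention matrix
    with the identity (residual connection), re-normalize the rows, and
    multiply into the running rollout matrix."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for A in attentions:
        A_res = 0.5 * A + 0.5 * np.eye(n)                   # add residual path
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)   # rows sum to 1 again
        rollout = A_res @ rollout                           # recurse up to this layer
    return rollout

# toy example (hypothetical): 2 layers, 3 tokens, uniform attention
A = np.full((3, 3), 1.0 / 3.0)
R = attention_rollout([A, A])
```

Each row of the result stays a probability distribution over input tokens, which is what makes the per-token contributions comparable across the selected layers.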