Update README.md

---
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

#### Notes
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, and STS-B we fine-tune starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), and [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, for those four tasks we report only the numbers fine-tuned from the pretrained base models.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**, as in the example below:

```bash
# Fine-tune DeBERTa-V2-XXLarge on MRPC across 8 GPUs with sharded DDP
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 \
  --per_device_train_batch_size 4 --learning_rate 3e-6 --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
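
Here **--sharded_ddp** turns on the Trainer's sharded data parallelism (backed by FairScale in the transformers releases that exposed this flag): optimizer state and gradients are split across the 8 GPUs so the 1.5B-parameter XXLarge model fits in memory, and **--fp16** cuts the footprint further. The recipe from note <sup>1</sup> uses the same script, pointing `--model_name_or_path` at an MNLI fine-tuned checkpoint such as `microsoft/deberta-v2-xxlarge-mnli` instead of the pretrained base.
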
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4103
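
Since the card's `pipeline_tag` is `zero-shot-classification`, the checkpoint drops into the standard `transformers` zero-shot pipeline, which scores each candidate label by treating it as an NLI hypothesis against the input text. A minimal sketch; the model id below is a placeholder for this repository's actual id:

```python
from transformers import pipeline

# Placeholder: substitute this repository's actual model id.
classifier = pipeline("zero-shot-classification", model="<this-repo-id>")

result = classifier(
    "The new laptop ships with a faster chip and a brighter display.",
    candidate_labels=["technology", "politics", "sports"],
)
# Labels come back sorted by score, best first.
print(result["labels"][0], result["scores"][0])
```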