clem (HF staff) committed on
Commit b3c8a97
1 Parent(s): c6d4235

fix model card

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -41,11 +41,11 @@ Which of the following best characterizes binne bams?\n
  - Sentence 4: Binne bams are places where people live."
  ---
 
- **How do I pronounce the name of the model?** T0 should be pronounced "T Zero" and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
+ **How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
 
  # Model Description
 
- T0*, or "T5 for zero-shot", shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
+ T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
 
  # Intended uses
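
For reference, a minimal zero-shot inference sketch matching the model description above. The checkpoint id `bigscience/T0pp` and the example prompt are assumptions (neither appears in this diff); the calls use the standard `transformers` seq2seq API.

```python
# Minimal sketch: zero-shot inference with a T0* checkpoint.
# Assumption: the checkpoint id "bigscience/T0pp" (not stated in this diff).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

# Per the model description, tasks are specified as natural language prompts.
prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```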