jbochi committed
Commit 08327a2
Parent: 83327bc

Update README.md

Files changed (1): README.md (+1 −4)
README.md CHANGED

````diff
@@ -443,9 +443,6 @@ Abstract:
 
 > We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.
 
-
-The 3B model uses 1 as the decoder start token, 7b
-
 ```python
 from transformers import T5ForConditionalGeneration, T5Tokenizer, GenerationConfig
 
@@ -462,7 +459,7 @@ outputs = model.generate(
 ))
 
 tokenizer.decode(outputs[0], skip_special_tokens=True)
-# Amo la pizza!
+# Eu amo pizza!
 ```
 
 Colab to generate these files is [here](https://colab.research.google.com/drive/1rZ2NRyl2zwmg0sQ2Wi-uZZF48iVYulTC#scrollTo=pVODoE6gA9sw).
````
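For context, the generation example in this README selects the output language by prefixing the input text with a target-language token of the form `<2xx>` (the commit's corrected output, "Eu amo pizza!", is consistent with a Portuguese `<2pt>` prefix). A minimal sketch of that prompt construction, assuming the `<2xx>` convention; the helper name is hypothetical:

```python
def build_mt_prompt(target_lang: str, text: str) -> str:
    """Prefix `text` with the <2xx> token that picks the target language.

    Hypothetical helper illustrating the MADLAD-400 MT prompt convention,
    e.g. "<2pt>" for Portuguese.
    """
    return f"<2{target_lang}> {text}"

print(build_mt_prompt("pt", "I love pizza!"))  # <2pt> I love pizza!
```

The prefixed string is what gets tokenized and passed to `model.generate` in the README's snippet.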