Added Contamination Info on Old Models: GPT-3, FLAN, GLaM, PaLM, PaLM 2
What are you reporting:
- Evaluation dataset(s) found in a pre-training corpus. (e.g. COPA found in ThePile)
- Evaluation dataset(s) found in a pre-trained model. (e.g. FLAN T5 has been trained on ANLI)
Evaluation dataset(s):
allenai/openbookqa
Anagrams 1
Anagrams 2
cimec/lambada
csebuetnlp/xlsum
Cycled Letters
EdinburghNLP/xsum
facebook/anli
ibragim-bad/arc_challenge
ibragim-bad/arc_easy
mandarjoshi/trivia_qa
natural_questions
piqa
quac
race
rajpurkar/squad_v2
Reversed Words
rmanluo/RoG-webqsp
Rowan/hellaswag
SAT Analogies
stanfordnlp/coqa
story_cloze
super_glue
Symbol Insertion
ucinlp/drop
wiki_lingua
winograd_wsc
winogrande
wmt/wmt16
Contaminated model(s):
FLAN, GLaM, GPT-3, PaLM, PaLM 2
Briefly describe your method to detect data contamination
- Data-based approach
- Model-based approach
Description of your method, 3-4 sentences. Evidence of data contamination (Read below):
Exact string matching: the GPT-3, FLAN, and GLaM papers use 13-gram overlaps, while the PaLM and PaLM 2 papers use 15-gram overlaps, to measure contamination between evaluation datasets and their pre-training corpora.
Papers found using: https://hitz-zentroa.github.io/lm-contamination/
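For reference, here is a minimal sketch of what such an exact n-gram overlap check can look like. It is an illustration only: the lowercased whitespace tokenization, the toy corpus, and the helper names (`ngrams`, `build_corpus_index`, `is_contaminated`) are assumptions made for this example, not the exact pipelines used in the cited papers, which run over full pre-training corpora at scale.

```python
# Minimal sketch of exact n-gram overlap contamination checking, in the spirit of
# the 13-gram (GPT-3, FLAN, GLaM) and 15-gram (PaLM, PaLM 2) checks.
# Illustration only: tokenization, normalization, and the toy corpus are assumptions.

from typing import Iterable, Set


def ngrams(tokens: list[str], n: int) -> Iterable[tuple[str, ...]]:
    """Yield all contiguous n-grams from a token list."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])


def build_corpus_index(corpus_docs: Iterable[str], n: int) -> Set[tuple[str, ...]]:
    """Collect every n-gram seen in the pre-training corpus (lowercased, whitespace-tokenized)."""
    index: Set[tuple[str, ...]] = set()
    for doc in corpus_docs:
        index.update(ngrams(doc.lower().split(), n))
    return index


def is_contaminated(example: str, corpus_index: Set[tuple[str, ...]], n: int) -> bool:
    """Flag an evaluation example if any of its n-grams appears verbatim in the corpus."""
    return any(g in corpus_index for g in ngrams(example.lower().split(), n))


if __name__ == "__main__":
    # Toy corpus and evaluation example; real checks run over the full pre-training data.
    corpus = ["the quick brown fox jumps over the lazy dog near the river bank today"]
    index = build_corpus_index(corpus, n=13)
    eval_example = "the quick brown fox jumps over the lazy dog near the river bank today"
    print(is_contaminated(eval_example, index, n=13))  # True: a 13-gram matches verbatim
```

A set-based index like this keeps the check to one pass over the corpus plus constant-time lookups per evaluation n-gram; the flagging criterion is simply whether any n-gram from an evaluation example appears verbatim in the pre-training data.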
Citation
Is there a paper that reports the data contamination or describes the method used to detect data contamination?
URL: [https://hitz-zentroa.github.io/lm-contamination/](https://hitz-zentroa.github.io/lm-contamination/)
Citation:
```
GPT-3:
@article{brown2020language,
  title={Language models are few-shot learners},
  author={Brown, Tom and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared D and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and others},
  journal={Advances in neural information processing systems},
  volume={33},
  pages={1877--1901},
  year={2020}
}

FLAN:
@article{wei2021finetuned,
  title={Finetuned language models are zero-shot learners},
  author={Wei, Jason and Bosma, Maarten and Zhao, Vincent Y and Guu, Kelvin and Yu, Adams Wei and Lester, Brian and Du, Nan and Dai, Andrew M and Le, Quoc V},
  journal={arXiv preprint arXiv:2109.01652},
  year={2021}
}

GLaM:
@inproceedings{du2022glam,
  title={Glam: Efficient scaling of language models with mixture-of-experts},
  author={Du, Nan and Huang, Yanping and Dai, Andrew M and Tong, Simon and Lepikhin, Dmitry and Xu, Yuanzhong and Krikun, Maxim and Zhou, Yanqi and Yu, Adams Wei and Firat, Orhan and others},
  booktitle={International Conference on Machine Learning},
  pages={5547--5569},
  year={2022},
  organization={PMLR}
}

PaLM:
@article{chowdhery2023palm,
  title={Palm: Scaling language modeling with pathways},
  author={Chowdhery, Aakanksha and Narang, Sharan and Devlin, Jacob and Bosma, Maarten and Mishra, Gaurav and Roberts, Adam and Barham, Paul and Chung, Hyung Won and Sutton, Charles and Gehrmann, Sebastian and others},
  journal={Journal of Machine Learning Research},
  volume={24},
  number={240},
  pages={1--113},
  year={2023}
}

PaLM 2:
@article{anil2023palm,
  title={Palm 2 technical report},
  author={Anil, Rohan and Dai, Andrew M and Firat, Orhan and Johnson, Melvin and Lepikhin, Dmitry and Passos, Alexandre and Shakeri, Siamak and Taropa, Emanuel and Bailey, Paige and Chen, Zhifeng and others},
  journal={arXiv preprint arXiv:2305.10403},
  year={2023}
}
```
*Important!* If you wish to be listed as an author in the final report, please complete this information for all the authors of this Pull Request.
- Full name: Ameya Prabhu
- Institution: Tübingen AI Center, University of Tübingen
- Email: ameya@prabhu.be
Hi @AmeyaPrabhu !
Let me know if the PR is ready to merge, I can resolve the conflicts with the main branch if you want.
Best,
Oscar
Hi Oscar,
The PR should be ready to merge. Could you double-check that nothing in there seems obviously off (since there were a lot of entries)? I did check, but it would be good to reconfirm.
I fixed some bugs related to the CSV format (extra ";" in some cases).
Thank you @AmeyaPrabhu for your contribution :)
Best,
Oscar