Modalities: Text
Formats: parquet
Languages: English
ArXiv: 2409.17115
Libraries: Datasets, Dask
License: odc-by
koalazf99 committed on
Commit 61b8928
1 Parent(s): 2bf5c08

Update README.md

Files changed (1):
  1. README.md +6 -2
README.md CHANGED
@@ -17,7 +17,7 @@ size_categories:
 <img src="prox-teaser.png">
 </p>
 
-[ArXiv](http://arxiv.org/abs/xxxx) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX)
+[ArXiv](http://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX)
 
 fineweb-pro is refined from [fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)(350BT sample) using the **ProX** refining framework.
 It contains about 100B high quality tokens, ready for general language model pre-training.
@@ -28,6 +28,10 @@ fineweb-pro is based on fineweb, which is made available under an ODC-By 1.0 lic
 
 ### Citation
 ```
-@misc{TBD
+@article{zhou2024programming,
+  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
+  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
+  journal={arXiv preprint arXiv:2409.17115},
+  year={2024}
 }
 ```
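For context, the dataset card above advertises parquet files loadable with the Datasets and Dask libraries. Below is a minimal sketch of streaming the data with the `datasets` library; the repo id `gair-prox/fineweb-pro` and the `text` column are assumptions inferred from the dataset name and the usual fineweb schema, not stated in this commit.

```python
from datasets import load_dataset

# Stream instead of downloading the full ~100B-token parquet dump up front.
# Repo id is an assumption based on the dataset/org names linked above.
ds = load_dataset("gair-prox/fineweb-pro", split="train", streaming=True)

# Peek at a few documents; the "text" column is assumed (fineweb-style schema).
for example in ds.take(3):
    print(example["text"][:200])
```

Streaming keeps memory flat and avoids materializing the whole dump locally, which matters at this scale; for distributed preprocessing, the same parquet files can instead be read with Dask.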