One of the biggest changes in Llama 3 was the training dataset, which grew about 7x over Llama 2 (from 2T to 15T tokens).
While Meta did not open-source the dataset, it sparked a thought: what would happen if everyone had access to a big, high-quality dataset?
To address that, in April this year @huggingface released FineWeb, a 15T-token open-source dataset.
And now they are releasing the FineWeb technical report and FineWeb-Edu:
- FineWeb: 15T tokens, outperforming other open datasets
- FineWeb-Edu: 1.3T tokens of the highest-quality educational content
- FineWeb-Edu-2: 5.4T high-quality educational tokens
FineWeb-Edu outperforms other datasets on MMLU, ARC, and OpenBookQA.
Released under the ODC-By 1.0 license.
Report: HuggingFaceFW/blogpost-fineweb-v1
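If you want to poke at the data yourself, here is a minimal sketch using the Hugging Face `datasets` library in streaming mode, so nothing close to 15T tokens is downloaded. The repo ids `HuggingFaceFW/fineweb` / `HuggingFaceFW/fineweb-edu`, the `sample-10BT` config, and the `text` field are assumptions not stated in the post; check the report for the canonical names.

```python
# Minimal sketch (assumed repo id and config): stream a few FineWeb records
# instead of downloading the full dataset.
from datasets import load_dataset

fineweb = load_dataset(
    "HuggingFaceFW/fineweb",   # assumed dataset repo id
    name="sample-10BT",        # assumed small sample config; omit for the full crawl
    split="train",
    streaming=True,            # iterate lazily over shards, no full download
)

for i, example in enumerate(fineweb):
    # Each record is assumed to carry the web-page text plus metadata fields.
    print(example.get("text", "")[:200])
    if i >= 2:
        break
```

The same pattern should work for FineWeb-Edu by swapping in its repo id.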