---
language:
  - zh
pipeline_tag: text-generation
license: apache-2.0
task_categories:
  - text-generation
size_categories:
  - 10B<n<100B
---

# Chinese Fineweb Edu Dataset V2

OpenCSG

[OpenCSG Community] [github] [wechat] [Twitter]

Chinese Fineweb Edu Dataset V2 is a comprehensive upgrade of the original Chinese Fineweb Edu, designed and optimized for natural language processing (NLP) tasks in the education sector. This high-quality Chinese pretraining dataset has undergone significant improvements and expansions, aimed at providing researchers and developers with more diverse and broadly applicable educational corpus resources. With a dataset size of 188 million entries (approximately 420 billion tokens), Fineweb Edu v2 not only increases the volume but also optimizes the data filtering methods and scoring models to ensure effectiveness and practicality in the educational domain.
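For reference, if the released dataset follows the standard Hugging Face layout, it can be streamed with the `datasets` library. The repository id below is an assumption and should be checked against the actual release:

```python
# Minimal loading sketch; the repository id is a guess, not a confirmed name.
from datasets import load_dataset

# Streaming avoids materializing the full ~420B-token corpus on disk.
ds = load_dataset("opencsg/chinese-fineweb-edu-v2", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example)  # each entry is expected to hold one filtered text document
    if i >= 2:
        break
```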

## Enhanced Scoring Model

In Chinese Fineweb Edu v2, the scoring model used for data filtering has been significantly upgraded to the csg-wukong-enterprise V2 model, which is larger in scale and more powerful than the model used in the previous version. csg-wukong-enterprise V2 boasts more parameters and deeper semantic understanding, especially excelling in the comprehension and processing of Chinese text. This model not only provides a more detailed analysis of the structure and content of the text but also captures deeper semantic and emotional nuances hidden within the language.

This enhancement allows the model to assess the educational value, writing quality, and practical value of text more accurately during filtering. Particularly for demanding material such as educational and technical texts, the upgraded scoring model ensures high quality and consistency in the filtering results. This improvement significantly boosts the reliability of data filtering, providing stronger support for subsequent model training.

## Increased Training Data Size and Content Diversity

The size and diversity of the training data are key factors influencing the performance of pretrained models. In Chinese Fineweb Edu v2, the training data has been significantly expanded to 188 million high-quality entries. These include various types of Chinese text, such as books, news articles, and blogs, and introduce more representative domains, covering topics like education, technology, history, culture, and current affairs.

Moreover, to strengthen cross-lingual understanding, Fineweb Edu v2 incorporates 25% English data. This not only increases the dataset's diversity but also equips models trained on it to handle cross-lingual tasks in addition to Chinese content, establishing a solid foundation for NLP tasks in mixed Chinese-English contexts and providing extensive training resources for multilingual capabilities.
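The card does not specify how the English portion was mixed in, but as a sketch, `datasets.interleave_datasets` can build a 75/25 Chinese/English stream; the repository ids below are assumptions, and the English corpus is only an illustrative choice:

```python
# Hypothetical sketch of a 75/25 Chinese/English pretraining mixture.
from datasets import load_dataset, interleave_datasets

zh = load_dataset("opencsg/chinese-fineweb-edu-v2",  # assumed repository id
                  split="train", streaming=True)
en = load_dataset("HuggingFaceFW/fineweb-edu",       # example English corpus
                  split="train", streaming=True)

# Align schemas before interleaving, assuming both expose a "text" field.
zh = zh.select_columns(["text"])
en = en.select_columns(["text"])

# Draw ~75% of examples from the Chinese corpus and ~25% from the English one.
mixed = interleave_datasets([zh, en], probabilities=[0.75, 0.25], seed=42)
```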

By introducing a variety of data types and languages, Fineweb2 not only improves the model's performance in Chinese settings but also expands its potential for global applications, showcasing its powerful capabilities in multilingual tasks.

## Prompt Improvements

During the construction of the Fineweb Edu v2 dataset, data filtering was especially critical. To ensure that only text with genuine educational value and practicality was selected, we carefully optimized the design of the prompts used for data filtering. The new prompts evaluate the educational value, writing quality, and practicality of web content more accurately, making the filtering process more precise.

The new prompts clearly define scoring standards for educational content and also set expectations for writing style, coherence, and thematic depth. The specific scoring criteria are as follows:

Below is an excerpt from a web page. Please use the following 5-point scoring system to assess the writing quality, educational value, and practicality of the webpage:

0 points: The page provides no educational value and consists entirely of irrelevant information (e.g., advertisements, promotional material, or content unsuitable for minors).
1 point: The page provides some basic information with potential educational value, but contains a large amount of irrelevant or non-academic content (such as advertisements and promotional material).
2 points: The page touches on elements related to education but does not align well with educational standards. It may mix educational content with non-educational material, give only a shallow overview of potentially useful topics, or present information in an incoherent writing style.
3 points: The page is suitable for educational use and introduces key concepts that might appear in a school curriculum, or practical information useful for personal development. Its content is coherent but may not be comprehensive, or may include some irrelevant information. It may resemble a short excerpt from a textbook: usable for study but with clear limitations, such as overly complex concepts or overly specific, unimportant events.
4 points: The page is highly relevant to education, beneficial for personal learning and development, and shows a clear, consistent writing style. It may resemble a chapter of a textbook or a tutorial, offering substantial educational content with minimal irrelevant information, and its concepts are not too advanced for students. The content is coherent, focused, and valuable for structured learning.
5 points: The excerpt is outstanding in educational value, fully suitable for teaching at the primary, secondary, or university level, or for professional study. It follows a detailed reasoning process, its writing style is easy to understand, and it offers deep, comprehensive insight into the topic without any non-educational or impractical content.

Webpage excerpt:
{}

After reviewing this webpage excerpt, briefly explain the reasoning behind your score in no more than 100 words, ending with the format "Educational Score: <score>". Please assign the score systematically based on the listed criteria.
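A minimal sketch of how this prompt can drive the filtering step. The scoring model, csg-wukong-enterprise V2, is not public, so the `generate` callable below is a hypothetical stand-in for its inference endpoint; note that the production prompt is issued in Chinese, so its replies end with “教育得分:<分数>” rather than the English form:

```python
# Hedged sketch of the prompt-based scoring step; `generate` stands in for
# the (non-public) csg-wukong-enterprise V2 inference endpoint.
import re

RUBRIC = "..."  # the full 5-point criteria shown above

PROMPT = (
    "Below is an excerpt from a web page. Please use the following 5-point "
    "scoring system to assess the writing quality, educational value, and "
    "practicality of the webpage:\n\n" + RUBRIC + "\n\n"
    "Webpage excerpt:\n{text}\n\n"
    "After reviewing this webpage excerpt, briefly explain the reasoning "
    "behind your score in no more than 100 words, ending with the format "
    '"Educational Score: <score>".'
)

def parse_score(response: str):
    # Accept both the translated English ending and the original Chinese
    # ending "教育得分:<分数>".
    m = re.search(r"(?:Educational Score|教育得分)[::]\s*(\d)", response)
    return int(m.group(1)) if m else None

def keep_for_pretraining(text: str, generate) -> bool:
    """Keep a document if the scoring model rates it 3 or above."""
    score = parse_score(generate(PROMPT.format(text=text)))
    return score is not None and score >= 3
```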

After all the data were merged, the samples were scored with the csg-wukong-enterprise V2 model; the score distribution is shown below. Texts scoring 3 or above were selected, totaling 188 million entries (about 420 billion tokens). These data are not only extensive but also carefully filtered and deduplicated, ensuring the dataset's high quality and uniqueness. The scored data form the Fineweb Edu v2 dataset used to train large-scale language models, helping them achieve superior performance across a variety of tasks.

(Figure: score distribution of the merged samples.)
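As an illustration of this selection step, a sketch assuming a scored intermediate file with `text` and `score` fields (the actual intermediate schema is not documented here):

```python
# Hypothetical sketch of threshold filtering plus a simple exact-dedup pass.
from datasets import load_dataset

scored = load_dataset("json", data_files="scored_corpus.jsonl", split="train")

# Keep only entries the scoring model rated 3 or above.
selected = scored.filter(lambda ex: ex["score"] >= 3)

# Exact deduplication by text hash; a production pipeline would typically use
# fuzzy methods (e.g., MinHash) on top of this.
seen = set()

def is_new(ex):
    key = hash(ex["text"])
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = selected.filter(is_new)
```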

## Expanded Data Sources

The range of data sources for the Fineweb2 dataset has been further extended. Compared to the original Fineweb, Fineweb2 introduces massive datasets from various fields and sources, including Industry2, CCI3, michao, wanjuan1.0, wudao, and ChineseWebText. These datasets cover a broader range of industries and domains, enhancing the diversity and applicability of the dataset.

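A sketch of how corpora from these sources might be pooled before scoring and deduplication; only the source names come from this card, and all paths and field names are placeholders:

```python
# Hypothetical sketch: pool several source corpora, tagging provenance.
from datasets import load_dataset, concatenate_datasets

sources = {  # placeholder local paths, one per corpus named above
    "Industry2": "data/industry2.jsonl",
    "CCI3": "data/cci3.jsonl",
    "michao": "data/michao.jsonl",
    "wanjuan1.0": "data/wanjuan1.0.jsonl",
    "wudao": "data/wudao.jsonl",
    "ChineseWebText": "data/chinesewebtext.jsonl",
}

parts = []
for name, path in sources.items():
    part = load_dataset("json", data_files=path, split="train")
    part = part.map(lambda ex, n=name: {"source": n})  # record provenance
    parts.append(part)

# Requires all parts to share the same schema (e.g., a common "text" field).
pool = concatenate_datasets(parts)
```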

In conclusion, the Fineweb2 dataset not only surpasses its predecessor in scale but also significantly improves the quality of data, content diversity, and precision of filtering. This lays a solid foundation for the further development of Chinese NLP applications and provides researchers with richer resources to explore and optimize various model training methods.

We warmly invite developers and researchers interested in this field to follow and engage with the community, working together to advance the technology. Stay tuned for the open-source release of the dataset!

## License Agreement

Usage of the Chinese Fineweb Edu V2 dataset requires adherence to the OpenCSG Community License. The dataset supports commercial use. If you plan to use the OpenCSG model or its derivatives for commercial purposes, you must comply with the terms and conditions of the OpenCSG Community License as well as the Apache 2.0 License. For commercial use, please email lorraineg@opencsg.com to obtain permission.
