_id (stringlengths 24–24) | id (stringlengths 5–121) | author (stringlengths 2–42) | cardData (stringlengths 2–958k, nullable) | disabled (bool, 2 classes) | gated (stringclasses, 3 values) | lastModified (stringlengths 24–24) | likes (int64, 0–6.17k) | trendingScore (float64, 0–97) | private (bool, 1 class) | sha (stringlengths 40–40) | description (stringlengths 0–6.67k, nullable) | downloads (int64, 0–2.42M) | tags (sequencelengths 1–7.92k) | createdAt (stringlengths 24–24) | key (stringclasses, 1 value) | citation (stringlengths 0–10.7k, nullable) | paperswithcode_id (stringclasses, 638 values) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63990f21cc50af73d29ecfa3 | fka/awesome-chatgpt-prompts | fka | {"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]} | false | False | 2024-09-03T21:28:41.000Z | 6,173 | 97 | false | 459a66186f8f83020117b8acc5ff5af69fc95b45 | 🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0
| 9,085 | [
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
] | 2022-12-13T23:47:45.000Z | null | null |
|
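The `tags` column in each row mixes faceted entries (`key:value` strings such as `license:cc0-1.0` or `size_categories:n<1K`) with free-form tags (such as `ChatGPT`). A minimal sketch of separating the two, using the tag list from the row above; the grouping helper here is illustrative, not part of any Hub API:

```python
# Split Hub-style tags into faceted (key:value) and free-form entries.
tags = [
    "task_categories:question-answering",
    "license:cc0-1.0",
    "size_categories:n<1K",
    "format:csv",
    "modality:text",
    "region:us",
    "ChatGPT",
]

faceted, plain = {}, []
for tag in tags:
    if ":" in tag:
        key, value = tag.split(":", 1)
        faceted.setdefault(key, []).append(value)
    else:
        plain.append(tag)

print(faceted["license"])  # ['cc0-1.0']
print(plain)               # ['ChatGPT']
```

The same split works for any row's tag list, since the faceted prefixes (`task_categories`, `language`, `license`, `size_categories`, `format`, `modality`, `library`, `arxiv`, `region`) are consistent across the table.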
67181a27dfa0b095f0902d33 | qq8933/OpenLongCoT-Pretrain | qq8933 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 269352240, "num_examples": 102906}], "download_size": 64709509, "dataset_size": 269352240}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | False | 2024-10-28T13:50:37.000Z | 46 | 44 | false | 40562378be9f86728440a0fb44f07ba2bdc03646 | Please cite me if this dataset is helpful for you! 🥰
@article{zhang2024llama,
title={LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning},
author={Zhang, Di and Wu, Jianbo and Lei, Jingdi and Che, Tong and Li, Jiatong and Xie, Tong and Huang, Xiaoshui and Zhang, Shufei and Pavone, Marco and Li, Yuqiang and others},
journal={arXiv preprint arXiv:2410.02884},
year={2024}
}
@article{zhang2024accessing,
title={Accessing GPT-4 level Mathematical Olympiad… See the full description on the dataset page: https://huggingface.co/datasets/qq8933/OpenLongCoT-Pretrain. | 224 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2410.02884",
"arxiv:2406.07394",
"region:us"
] | 2024-10-22T21:33:27.000Z | null | null |
|
66f5a5d9763d438dab13f188 | Spawning/PD12M | Spawning | {"language": ["en"], "pretty_name": "PD12M", "license": "cdla-permissive-2.0", "tags": ["image"]} | false | False | 2024-10-31T15:25:49.000Z | 95 | 43 | false | 4fd5d707a72aad71bd88c7e7bc5df2ae5e0d6c53 |
PD12M
Summary
At 12.4 million image-caption pairs, PD12M is the largest public domain image-text dataset to date, with sufficient size to train foundation models while minimizing copyright concerns. Through the Source.Plus platform, we also introduce novel, community-driven dataset governance mechanisms that reduce harm and support reproducibility over time.
Jordan Meyer Nicholas Padgett Cullen Miller Laura Exline
Paper Datasheet Project… See the full description on the dataset page: https://huggingface.co/datasets/Spawning/PD12M.
"language:en",
"license:cdla-permissive-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2410.23144",
"region:us",
"image"
] | 2024-09-26T18:20:09.000Z | null | null |
|
670d0cb9d905bbbc78d7a18a | neuralwork/arxiver | neuralwork | {"license": "cc-by-nc-sa-4.0", "size_categories": ["10K<n<100K"]} | false | False | 2024-11-01T21:18:04.000Z | 337 | 26 | false | 698a6662e77fd5dd45dbbec988abc8123e5fa086 |
Arxiver Dataset
Arxiver consists of 63,357 arXiv papers converted to multi-markdown (.mmd) format. Our dataset includes original arXiv article IDs, titles, abstracts, authors, publication dates, URLs and corresponding markdown files published between January 2023 and October 2023.
We hope our dataset will be useful for various applications such as semantic search, domain specific language modeling, question answering and summarization.
Curation
The Arxiver dataset… See the full description on the dataset page: https://huggingface.co/datasets/neuralwork/arxiver. | 4,252 | [
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-10-14T12:21:13.000Z | null | null |
|
67214aee41fba8f8b985b247 | wyu1/Leopard-Instruct | wyu1 | {"configs": [{"config_name": "arxiv", "data_files": [{"split": "train", "path": "arxiv/*"}]}, {"config_name": "chartgemma", "data_files": [{"split": "train", "path": "chartgemma/*"}]}, {"config_name": "chartqa", "data_files": [{"split": "train", "path": "chartqa/*"}]}, {"config_name": "dude", "data_files": [{"split": "train", "path": "dude/*"}]}, {"config_name": "dvqa", "data_files": [{"split": "train", "path": "dvqa/*"}]}, {"config_name": "figureqa", "data_files": [{"split": "train", "path": "figureqa/*"}]}, {"config_name": "iconqa", "data_files": [{"split": "train", "path": "iconqa/*"}]}, {"config_name": "infographics", "data_files": [{"split": "train", "path": "infographics/*"}]}, {"config_name": "llavar", "data_files": [{"split": "train", "path": "llavar/*"}]}, {"config_name": "mapqa", "data_files": [{"split": "train", "path": "mapqa/*"}]}, {"config_name": "mathv360k", "data_files": [{"split": "train", "path": "mathv360k/*"}]}, {"config_name": "mind2web", "data_files": [{"split": "train", "path": "mind2web/*"}]}, {"config_name": "monkey", "data_files": [{"split": "train", "path": "monkey/*"}]}, {"config_name": "mpdocvqa", "data_files": [{"split": "train", "path": "mpdocvqa/*"}]}, {"config_name": "mplugdocreason", "data_files": [{"split": "train", "path": "mplugdocreason/*"}]}, {"config_name": "multichartqa", "data_files": [{"split": "train", "path": "multi_chartqa/*"}]}, {"config_name": "multihiertt", "data_files": [{"split": "train", "path": "multihiertt/*"}]}, {"config_name": "multitab", "data_files": [{"split": "train", "path": "multitab/*"}]}, {"config_name": "omniact", "data_files": [{"split": "train", "path": "omniact/*"}]}, {"config_name": "pew_chart", "data_files": [{"split": "train", "path": "pew_chart/*"}]}, {"config_name": "rico", "data_files": [{"split": "train", "path": "rico/*"}]}, {"config_name": "slidesgeneration", "data_files": [{"split": "train", "path": "slidesgeneration/*"}]}, 
{"config_name": "slideshare", "data_files": [{"split": "train", "path": "slideshare/*"}]}, {"config_name": "slidevqa", "data_files": [{"split": "train", "path": "slidevqa/*"}]}, {"config_name": "docvqa", "data_files": [{"split": "train", "path": "spdocvqa/*"}]}, {"config_name": "tab_entity", "data_files": [{"split": "train", "path": "tab_entity/*"}]}, {"config_name": "tabmwp", "data_files": [{"split": "train", "path": "tabmwp/*"}]}, {"config_name": "tat_dqa", "data_files": [{"split": "train", "path": "tat_dqa/*"}]}, {"config_name": "website_screenshots", "data_files": [{"split": "train", "path": "website_screenshots/*"}]}, {"config_name": "webui", "data_files": [{"split": "train", "path": "webui/*"}]}, {"config_name": "webvision", "data_files": [{"split": "train", "path": "webvision/*"}]}], "license": "apache-2.0", "language": ["en"], "tags": ["multimodal", "instruction-following", "multi-image", "lmm", "vlm", "mllm"], "size_categories": ["100K<n<1M"]} | false | False | 2024-11-08T00:12:25.000Z | 31 | 26 | false | 93317b272c5a9d9c0417fa6ea6e2be89ac9215ea |
Leopard-Instruct
Paper | Github | Models-LLaVA | Models-Idefics2
Summaries
Leopard-Instruct is a large instruction-tuning dataset, comprising 925K instances, with 739K specifically designed for text-rich, multi-image scenarios. It's been used to train Leopard-LLaVA [checkpoint] and Leopard-Idefics2 [checkpoint].
Loading dataset
To load the dataset without automatically downloading and processing the images (please run the following code with… See the full description on the dataset page: https://huggingface.co/datasets/wyu1/Leopard-Instruct. | 32,979 | [
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2410.01744",
"region:us",
"multimodal",
"instruction-following",
"multi-image",
"lmm",
"vlm",
"mllm"
] | 2024-10-29T20:51:58.000Z | null | null |
|
67261c706b966e02542c1743 | beomi/KoAlpaca-RealQA | beomi | {"dataset_info": {"features": [{"name": "custom_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26211669, "num_examples": 18524}], "download_size": 13989391, "dataset_size": 26211669}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "cc-by-sa-4.0"} | false | auto | 2024-11-03T07:00:13.000Z | 22 | 22 | false | a7df38a0b2cc187b72b40330af81e7b9f28dd95b |
KoAlpaca-RealQA: A Korean Instruction Dataset Reflecting Real User Scenarios
Dataset Summary
The KoAlpaca-RealQA dataset is a unique Korean instruction dataset designed to closely reflect real user interactions in the Korean language. Unlike conventional Korean instruction datasets that rely heavily on translated prompts, this dataset is composed of authentic Korean instructions derived from real-world use cases. Specifically, the dataset has been curated from… See the full description on the dataset page: https://huggingface.co/datasets/beomi/KoAlpaca-RealQA.
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-11-02T12:34:56.000Z | null | null |
|
670e1f14c308791317666994 | BAAI/Infinity-MM | BAAI | {"license": "cc-by-sa-4.0", "configs": [{"config_name": "stage1", "data_files": [{"split": "train", "path": "stage1/*/*"}]}, {"config_name": "stage2", "data_files": [{"split": "train", "path": "stage2/*/*/*"}]}, {"config_name": "stage3", "data_files": [{"split": "train", "path": "stage3/*/*"}]}, {"config_name": "stage4", "data_files": [{"split": "train", "path": "stage4/*/*/*"}]}], "language": ["en", "zh"], "size_categories": ["10M<n<100M"], "task_categories": ["image-to-text"], "extra_gated_prompt": "You agree to not use the dataset to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Company/Organization": "text", "Country": "country"}} | false | auto | 2024-11-05T06:57:13.000Z | 59 | 21 | false | 79e444ad1cf4744630e75964b277944bbc44f837 |
Introduction
Beijing Academy of Artificial Intelligence (BAAI)
We collect, organize and open-source the large-scale multimodal instruction dataset, Infinity-MM, consisting of tens of millions of samples. Through quality filtering and deduplication, the dataset has high quality and diversity.
We propose a synthetic data generation method based on open-source models and labeling system, using detailed image annotations and diverse question generation.
News… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/Infinity-MM. | 42,171 | [
"task_categories:image-to-text",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"size_categories:100M<n<1B",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2410.18558",
"region:us"
] | 2024-10-15T07:51:48.000Z | null | null |
|
66c84764a47b2d6c582bbb02 | amphion/Emilia-Dataset | amphion | {"license": "cc-by-nc-4.0", "task_categories": ["text-to-speech", "automatic-speech-recognition"], "language": ["zh", "en", "ja", "fr", "de", "ko"], "pretty_name": "Emilia", "size_categories": ["10M<n<100M"], "extra_gated_prompt": "Terms of Access: The researcher has requested permission to use the Emilia dataset and the Emilia-Pipe preprocessing pipeline. In exchange for such permission, the researcher hereby agrees to the following terms and conditions:\n1. The researcher shall use the dataset ONLY for non-commercial research and educational purposes.\n2. The authors make no representations or warranties regarding the dataset, \n including but not limited to warranties of non-infringement or fitness for a particular purpose.\n\n3. The researcher accepts full responsibility for their use of the dataset and shall defend and indemnify the authors of Emilia, \n including their employees, trustees, officers, and agents, against any and all claims arising from the researcher's use of the dataset, \n including but not limited to the researcher's use of any copies of copyrighted content that they may create from the dataset.\n\n4. The researcher may provide research associates and colleagues with access to the dataset,\n provided that they first agree to be bound by these terms and conditions.\n \n5. The authors reserve the right to terminate the researcher's access to the dataset at any time.\n6. 
If the researcher is employed by a for-profit, commercial entity, the researcher's employer shall also be bound by these terms and conditions, and the researcher hereby represents that they are fully authorized to enter into this agreement on behalf of such employer.", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Position": "text", "Your Supervisor/manager/director": "text", "I agree to the Terms of Access": "checkbox"}} | false | auto | 2024-09-06T13:29:55.000Z | 147 | 20 | false | bcaad00d13e7c101485990a46e88f5884ffed3fc |
Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation
This is the official repository for the Emilia dataset and the source code for the Emilia-Pipe speech data preprocessing pipeline.
News 🔥
2024/08/28: Welcome to join Amphion's Discord channel to stay connected and engage with our community!
2024/08/27: The Emilia dataset is now publicly available! Discover the most extensive and diverse speech generation… See the full description on the dataset page: https://huggingface.co/datasets/amphion/Emilia-Dataset. | 52,408 | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"language:zh",
"language:en",
"language:ja",
"language:fr",
"language:de",
"language:ko",
"license:cc-by-nc-4.0",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:audio",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2407.05361",
"region:us"
] | 2024-08-23T08:25:08.000Z | null | null |
|
670f08ae2e97b2afe4d2df9b | GAIR/o1-journey | GAIR | {"language": ["en"], "size_categories": ["n<1K"]} | false | False | 2024-10-16T00:42:02.000Z | 65 | 19 | false | 32deef4773fe1f9488ff2052daf64035c034c0ea | Dataset for O1 Replication Journey: A Strategic Progress Report
Usage
from datasets import load_dataset
dataset = load_dataset("GAIR/o1-journey", split="train")
Citation
If you find our dataset useful, please cite:
@misc{o1journey,
author = {Yiwei Qin and Xuefeng Li and Haoyang Zou and Yixiu Liu and Shijie Xia and Zhen Huang and Yixin Ye and Weizhe Yuan and Zhengzhong Liu and Yuanzhi Li and Pengfei Liu},
title = {O1 Replication Journey: A Strategic Progress… See the full description on the dataset page: https://huggingface.co/datasets/GAIR/o1-journey. | 825 | [
"language:en",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-10-16T00:28:30.000Z | null | null |
|
66fc03bc2d7c7dffd1d95786 | argilla/Synth-APIGen-v0.1 | argilla | {"dataset_info": {"features": [{"name": "func_name", "dtype": "string"}, {"name": "func_desc", "dtype": "string"}, {"name": "tools", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answers", "dtype": "string"}, {"name": "model_name", "dtype": "string"}, {"name": "hash_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77390022, "num_examples": 49402}], "download_size": 29656761, "dataset_size": 77390022}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["synthetic", "distilabel", "function-calling"], "size_categories": ["10K<n<100K"]} | false | False | 2024-10-10T11:52:03.000Z | 35 | 18 | false | 20107f6709aabd18c7f7b4afc96fe7bfe848b5bb |
Dataset card for Synth-APIGen-v0.1
This dataset has been created with distilabel.
Pipeline script: pipeline_apigen_train.py.
Dataset creation
It has been created with distilabel==1.4.0 version.
This dataset is an implementation of APIGen: Automated Pipeline for Generating Verifiable and Diverse Function-Calling Datasets in distilabel,
generated from synthetic functions. The process can be summarized as follows:
Generate (or in this case modify)… See the full description on the dataset page: https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1. | 260 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"arxiv:2406.18518",
"region:us",
"synthetic",
"distilabel",
"function-calling"
] | 2024-10-01T14:14:20.000Z | null | null |
|
649f37af37bfb5202beabdf4 | allenai/dolma | allenai | {"license": "odc-by", "viewer": false, "task_categories": ["text-generation"], "language": ["en"], "tags": ["language-modeling", "casual-lm", "llm"], "pretty_name": "Dolma", "size_categories": ["n>1T"]} | false | False | 2024-04-17T02:57:00.000Z | 838 | 15 | false | 7f48140530a023e9ea4c5cfb141160922727d4d3 | Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research | 938 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:n>1T",
"arxiv:2402.00159",
"arxiv:2301.13688",
"region:us",
"language-modeling",
"casual-lm",
"llm"
] | 2023-06-30T20:14:39.000Z | @article{dolma,
title = {{Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
author = {
Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and
Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and
Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and Ian Magnusson and
Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and
Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and Emma Strubell and Nishant Subramani and
Oyvind Tafjord and Evan Pete Walsh and Hannaneh Hajishirzi and Noah A. Smith and Luke Zettlemoyer and
Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo
},
year = {2024},
journal={arXiv preprint},
} | null |
|
656d9c2bc497edf0a7be5959 | tomytjandra/h-and-m-fashion-caption | tomytjandra | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 7843224039.084, "num_examples": 20491}], "download_size": 6302088359, "dataset_size": 7843224039.084}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | False | 2023-12-04T11:07:53.000Z | 13 | 12 | false | 2083a7e30878af2993632b2fc3565ed4a2159534 |
Dataset Card for "h-and-m-fashion-caption"
More Information needed
| 108 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2023-12-04T09:30:19.000Z | null | null |
|
6644c76014331c74667fb214 | TIGER-Lab/WebInstructSub | TIGER-Lab | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["question-answering"], "pretty_name": "WebInstruct", "dataset_info": {"features": [{"name": "orig_question", "dtype": "string"}, {"name": "orig_answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "index", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6215888891, "num_examples": 2335220}], "download_size": 3509803840, "dataset_size": 6215888891}, "tags": ["language model"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | False | 2024-10-27T03:19:23.000Z | 132 | 12 | false | 559b33b6bcd34da3da047bb235532941026955a4 |
🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: https://tiger-ai-lab.github.io/MAmmoTH2/
Paper: https://arxiv.org/pdf/2405.03548
Code: https://github.com/TIGER-AI-Lab/MAmmoTH2
WebInstruct (Subset)
This repo contains the partial dataset used in "MAmmoTH2: Scaling Instructions from the Web". This partial data comes mostly from forums like Stack Exchange. This subset contains very high-quality data to boost LLM performance through instruction… See the full description on the dataset page: https://huggingface.co/datasets/TIGER-Lab/WebInstructSub. | 608 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2405.03548",
"region:us",
"language model"
] | 2024-05-15T14:32:00.000Z | null | null |
|
66f830e08d215c6331bec22a | nvidia/OpenMathInstruct-2 | nvidia | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10M<n<100M"], "task_categories": ["question-answering", "text-generation"], "pretty_name": "OpenMathInstruct-2", "dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "generated_solution", "dtype": "string"}, {"name": "expected_answer", "dtype": "string"}, {"name": "problem_source", "dtype": "string"}], "splits": [{"name": "train_1M", "num_bytes": 1350383003, "num_examples": 1000000}, {"name": "train_2M", "num_bytes": 2760009675, "num_examples": 2000000}, {"name": "train_5M", "num_bytes": 6546496157, "num_examples": 5000000}, {"name": "train", "num_bytes": 15558412976, "num_examples": 13972791}], "download_size": 20208929853, "dataset_size": 26215301811}, "tags": ["math", "nvidia"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "train_1M", "path": "data/train_1M-*"}, {"split": "train_2M", "path": "data/train_2M-*"}, {"split": "train_5M", "path": "data/train_5M-*"}]}]} | false | False | 2024-11-01T22:04:33.000Z | 105 | 11 | false | ac3d019aa67043f0f25cce7eed8f5926fe580c5a |
OpenMathInstruct-2
OpenMathInstruct-2 is a math instruction tuning dataset with 14M problem-solution pairs
generated using the Llama3.1-405B-Instruct model.
The training set problems of GSM8K
and MATH are used for constructing the dataset in the following ways:
Solution augmentation: Generating chain-of-thought solutions for training set problems in GSM8K and MATH.
Problem-Solution augmentation: Generating new problems, followed by solutions for these new problems. … See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenMathInstruct-2. | 15,043 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2410.01560",
"region:us",
"math",
"nvidia"
] | 2024-09-28T16:37:52.000Z | null | null |
|
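The `cardData` blobs in this table are JSON, so split metadata like that in the OpenMathInstruct-2 row above can be read back programmatically. A hedged sketch using a trimmed copy of that row's `dataset_info` (only the `name` and `num_examples` fields are reproduced here; the real blob carries more keys):

```python
import json

# Trimmed excerpt of the OpenMathInstruct-2 cardData shown in the row above.
card_data = json.loads("""
{"dataset_info": {"splits": [
    {"name": "train_1M", "num_examples": 1000000},
    {"name": "train_2M", "num_examples": 2000000},
    {"name": "train_5M", "num_examples": 5000000},
    {"name": "train",    "num_examples": 13972791}
]}}
""")

for split in card_data["dataset_info"]["splits"]:
    print(f'{split["name"]}: {split["num_examples"]:,} examples')
```

The same pattern applies to any row whose `cardData` includes a `dataset_info` block; rows that only carry license/tag metadata simply lack the `splits` key.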
66952974b8a00bc24d6b112a | HuggingFaceTB/smollm-corpus | HuggingFaceTB | {"license": "odc-by", "dataset_info": [{"config_name": "cosmopedia-v2", "features": [{"name": "prompt", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "token_length", "dtype": "int64"}, {"name": "audience", "dtype": "string"}, {"name": "format", "dtype": "string"}, {"name": "seed_data", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 212503640747, "num_examples": 39134000}], "download_size": 122361137711, "dataset_size": 212503640747}, {"config_name": "fineweb-edu-dedup", "features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "dump", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "timestamp[s]"}, {"name": "file_path", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}, {"name": "token_count", "dtype": "int64"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 957570164451, "num_examples": 190168005}], "download_size": 550069279849, "dataset_size": 957570164451}, {"config_name": "python-edu", "features": [{"name": "blob_id", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "length_bytes", "dtype": "int64"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 989334135, "num_examples": 7678448}], "download_size": 643903049, "dataset_size": 989334135}], "configs": [{"config_name": "cosmopedia-v2", "data_files": [{"split": "train", "path": "cosmopedia-v2/train-*"}]}, {"config_name": "fineweb-edu-dedup", "data_files": [{"split": "train", "path": "fineweb-edu-dedup/train-*"}]}, {"config_name": "python-edu", "data_files": [{"split": "train", "path": "python-edu/train-*"}]}], "language": ["en"]} | false | 
False | 2024-09-06T07:04:57.000Z | 239 | 9 | false | 3ba9d605774198c5868892d7a8deda78031a781f |
SmolLM-Corpus
This dataset is a curated collection of high-quality educational and synthetic data designed for training small language models.
You can find more details about the models trained on this dataset in our SmolLM blog post.
Dataset subsets
Cosmopedia v2
Cosmopedia v2 is an enhanced version of Cosmopedia, the largest synthetic dataset for pre-training, consisting of over 39 million textbooks, blog posts, and stories generated by… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus. | 29,314 | [
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-07-15T13:51:48.000Z | null | null |
|
66a48190424f6ad0636bbd70 | vikhyatk/lofi | vikhyatk | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "prompt", "dtype": "string"}]}, "license": "cc-by-nc-4.0"} | false | False | 2024-10-26T20:42:55.000Z | 69 | 9 | false | 966a2d3065aac26c0385b4ef2d50983c0429a305 | 7,000+ hours of lofi music generated by MusicGen Large, with diverse prompts. The prompts were sampled from Llama 3.1 8B Base, starting with a seed set of 1,960 handwritten prompts of which a random 16 are used in a few-shot setting to generate additional diverse prompts.
In addition to the CC-BY-NC license, by using this dataset you are agreeing to the fact that the Pleiades star system is a binary system and that any claim otherwise is a lie.
What people are saying
this… See the full description on the dataset page: https://huggingface.co/datasets/vikhyatk/lofi. | 2,741 | [
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-07-27T05:11:44.000Z | null | null |
|
671928371e52d113736171a4 | ClimatePolicyRadar/all-document-text-data | ClimatePolicyRadar | {"license": "cc-by-4.0", "size_categories": ["10M<n<100M"]} | false | auto | 2024-10-28T12:00:00.000Z | 10 | 9 | false | 13d13430311b09d3f58676625a0e38c61f66355c |
Climate Policy Radar Open Data
This repo contains the full text data of all of the documents from the Climate Policy Radar database (CPR), which is also available at Climate Change Laws of the World (CCLW).
Please note that this replaces the Global Stocktake open dataset: that data, including all NDCs and IPCC reports is now a subset of this dataset.
What's in this dataset
This dataset contains two corpus types (groups of the same types or sources of documents)… See the full description on the dataset page: https://huggingface.co/datasets/ClimatePolicyRadar/all-document-text-data. | 45 | [
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-10-23T16:45:43.000Z | null | null |
|
653785ff8e37b02865e64be0 | HuggingFaceH4/ultrafeedback_binarized | HuggingFaceH4 | {"language": ["en"], "license": "mit", "task_categories": ["text-generation"], "pretty_name": "UltraFeedback Binarized", "configs": [{"config_name": "default", "data_files": [{"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "train_sft", "path": "data/train_sft-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}, {"split": "test_sft", "path": "data/test_sft-*"}, {"split": "train_gen", "path": "data/train_gen-*"}, {"split": "test_gen", "path": "data/test_gen-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "score_chosen", "dtype": "float64"}, {"name": "score_rejected", "dtype": "float64"}], "splits": [{"name": "train_prefs", "num_bytes": 405688662, "num_examples": 61135}, {"name": "train_sft", "num_bytes": 405688662, "num_examples": 61135}, {"name": "test_prefs", "num_bytes": 13161585, "num_examples": 2000}, {"name": "test_sft", "num_bytes": 6697333, "num_examples": 1000}, {"name": "train_gen", "num_bytes": 325040536, "num_examples": 61135}, {"name": "test_gen", "num_bytes": 5337695, "num_examples": 1000}], "download_size": 649967196, "dataset_size": 1161614473}} | false | False | 2024-10-16T11:49:06.000Z | 238 | 8 | false | 3949bf5f8c17c394422ccfab0c31ea9c20bdeb85 |
Dataset Card for UltraFeedback Binarized
Dataset Description
This is a pre-processed version of the UltraFeedback dataset and was used to train Zephyr-7B-β, a state-of-the-art chat model at the 7B parameter scale.
The original UltraFeedback dataset consists of 64k prompts, where each prompt is accompanied by four model completions from a wide variety of open and proprietary models. GPT-4 is then used to assign a score to each completion, along criteria like… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized.
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2310.01377",
"region:us"
] | 2023-10-24T08:53:19.000Z | null | null |
|
6703a9b1dfea46624547b361 | Sterzhang/PVIT-3M | Sterzhang | {"configs": [{"config_name": "PVIT-3M", "data_files": [{"split": "all_data", "path": "PVIT-3M.json"}]}], "language": ["en"], "task_categories": ["visual-question-answering", "image-text-to-text"], "tags": ["multi-modal", "personalized"], "license": "apache-2.0", "pretty_name": "personalized visual instruction tuning", "size_categories": ["1M<n<10M"]} | false | False | 2024-11-02T07:41:57.000Z | 13 | 8 | false | 68c0ad34851b06e7e408b092c1f8ee1004f6c92b |
PVIT-3M
The paper titled "Personalized Visual Instruction Tuning" introduces a novel dataset called PVIT-3M. This dataset is specifically designed for tuning MLLMs in the context of personalized visual instruction tasks. The dataset consists of 3 million image-text pairs that aim to improve MLLMs' abilities to generate responses based on personalized visual inputs, making them more tailored and adaptable to individual user needs and preferences.
Hereβs the PVIT-3M statistics:β¦ See the full description on the dataset page: https://huggingface.co/datasets/Sterzhang/PVIT-3M. | 40,634 | [
"task_categories:visual-question-answering",
"task_categories:image-text-to-text",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2410.07113",
"region:us",
"multi-modal",
"personalized"
] | 2024-10-07T09:28:17.000Z | null | null |
|
670bd71d721603bf001c0399 | opencsg/chinese-fineweb-edu-v2 | opencsg | {"language": ["zh"], "pipeline_tag": "text-generation", "license": "apache-2.0", "task_categories": ["text-generation"], "size_categories": ["10B<n<100B"]} | false | False | 2024-10-26T04:51:41.000Z | 39 | 8 | false | bd123e34c706a1b34274a79e1e1cd81b18cda5cc |
Chinese Fineweb Edu Dataset V2 [中文] [English]
[OpenCSG Community] [github] [wechat] [Twitter]
Chinese Fineweb Edu Dataset V2 is a comprehensive upgrade of the original Chinese Fineweb Edu, designed and optimized for natural language processing (NLP) tasks in the education sector. This high-quality Chinese pretraining dataset has undergone significant improvements and expansions, aimed at providing researchers and developers with more diverse and broadly… See the full description on the dataset page: https://huggingface.co/datasets/opencsg/chinese-fineweb-edu-v2. | 23,051 | [
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-10-13T14:20:13.000Z | null | null |
|
6718c7eb95693d6c54671278 | marcelbinz/Psych-101 | marcelbinz | {"license": "apache-2.0", "language": ["en"], "tags": ["Psychology"], "pretty_name": "Psych-101", "size_categories": ["100B<n<1T"]} | false | False | 2024-11-02T16:43:37.000Z | 33 | 8 | false | 611565c66395e2787cd7e3305149bb75dc138024 |
Dataset Summary
Psych-101 is a data set of natural language transcripts from human psychological experiments.
It comprises trial-by-trial data from 160 psychological experiments and 60,092 participants, making 10,681,650 choices.
Human choices are encapsulated in "<<" and ">>" tokens.
Paper: Centaur: a foundation model of human cognition
Point of Contact: Marcel Binz
Example Prompt
You will be presented with triplets of objects, which will be assigned to the… See the full description on the dataset page: https://huggingface.co/datasets/marcelbinz/Psych-101. | 166 | [
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2410.20268",
"region:us",
"Psychology"
] | 2024-10-23T09:54:51.000Z | null | null |
|
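Since Psych-101 wraps every human choice in `<<` and `>>` tokens (as noted in the card above), the choices can be recovered from a transcript with a small regex helper. A sketch — the helper and the sample transcript are illustrative, not part of the dataset's tooling:

```python
import re

def extract_choices(transcript: str) -> list[str]:
    """Return every human choice wrapped in << >> tokens, in document order."""
    # Non-greedy match so adjacent tokens are not merged into one span.
    return re.findall(r"<<(.*?)>>", transcript)

transcript = ("You will be presented with triplets of objects. "
              "You chose <<object B>> and then <<object A>>.")
print(extract_choices(transcript))  # -> ['object B', 'object A']
```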
625552d2b339bb03abe3432d | openai/gsm8k | openai | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]} | false | False | 2024-01-04T12:05:15.000Z | 408 | 7 | false | e53f048856ff4f594e959d75785d2c2d37b678ee |
Dataset Card for GSM8K
Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve.
Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k. | 201,239 | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2110.14168",
"region:us",
"math-word-problems"
] | 2022-04-12T10:22:10.000Z | null | gsm8k |
|
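The GSM8K card above exposes two configs (`main` and `socratic`), each with `train` and `test` splits; the layout can be read straight out of the card's `configs` JSON. A sketch that enumerates them — the JSON fragment is copied from the row above, lightly reformatted:

```python
import json

# "configs" entry as it appears in the GSM8K card data above.
card_configs = json.loads("""
[{"config_name": "main",
  "data_files": [{"split": "train", "path": "main/train-*"},
                 {"split": "test",  "path": "main/test-*"}]},
 {"config_name": "socratic",
  "data_files": [{"split": "train", "path": "socratic/train-*"},
                 {"split": "test",  "path": "socratic/test-*"}]}]
""")

for cfg in card_configs:
    splits = [d["split"] for d in cfg["data_files"]]
    print(cfg["config_name"], splits)
# -> main ['train', 'test']
#    socratic ['train', 'test']
```

These config names are the same strings one would pass as the second argument to the `datasets` library's `load_dataset` when loading this dataset.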
639244f571c51c43091df168 | Anthropic/hh-rlhf | Anthropic | {"license": "mit", "tags": ["human-feedback"]} | false | False | 2023-05-26T18:47:34.000Z | 1,198 | 7 | false | 09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa |
Dataset Card for HH-RLHF
Dataset Summary
This repository provides access to two different kinds of data:
Human preference data about helpfulness and harmlessness from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. These data are meant to train preference (or reward) models for subsequent RLHF training. These data are not meant for supervised training of dialogue agents. Training dialogue agents on these data is likely… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/hh-rlhf. | 8,648 | [
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2204.05862",
"region:us",
"human-feedback"
] | 2022-12-08T20:11:33.000Z | null | null |
|
66558cea3e96e1c5975420f6 | OpenGVLab/ShareGPT-4o | OpenGVLab | {"license": "mit", "extra_gated_prompt": "You agree to not use the dataset to conduct experiments that cause harm to human subjects. Please note that the data in this dataset may be subject to other agreements. Before using the data, be sure to read the relevant agreements carefully to ensure compliant use. Video copyrights belong to the original video creators or platforms and are for academic research use only.", "task_categories": ["visual-question-answering", "question-answering"], "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}, "language": ["en"], "size_categories": ["100K<n<1M"], "configs": [{"config_name": "image_caption", "data_files": [{"split": "images", "path": "image_conversations/gpt-4o.jsonl"}]}, {"config_name": "video_caption", "data_files": [{"split": "ptest", "path": "video_conversations/gpt4o.jsonl"}]}]} | false | auto | 2024-08-17T07:51:28.000Z | 141 | 7 | false | a69d5b4d2c5343146e27b46a22638d346f14f013 | null | 8,920 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-05-28T07:51:06.000Z | null | null |
|
6655eb19d17e141dcb546ed5 | HuggingFaceFW/fineweb-edu | HuggingFaceFW | {"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb-Edu", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": 
"CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, 
{"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": 
"train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]} | false | False | 2024-10-11T07:55:10.000Z | 527 | 7 | false | 651a648da38bf545cc5487530dbf59d8168c8de3 |
📚 FineWeb-Edu
1.3 trillion tokens of the finest educational data the 🌐 web has to offer
Paper: https://arxiv.org/abs/2406.17557
What is it?
📚 FineWeb-Edu dataset consists of 1.3T tokens and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.
To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by LLama3-70B-Instruct. We… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu. | 555,451 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.17557",
"arxiv:2404.14219",
"arxiv:2401.10020",
"arxiv:2109.07445",
"doi:10.57967/hf/2497",
"region:us"
] | 2024-05-28T14:32:57.000Z | null | null |
|
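FineWeb-Edu's many per-dump configs in the card above follow the Common Crawl naming scheme `CC-MAIN-<year>-<week>`. A sketch of filtering those config names down to a given crawl year — the list below is a small subset of the names in the card, and the helper is illustrative:

```python
import re

# Subset of the per-dump config names listed in the FineWeb-Edu card above.
configs = ["CC-MAIN-2024-10", "CC-MAIN-2023-50", "CC-MAIN-2023-40",
           "CC-MAIN-2016-07", "CC-MAIN-2013-20"]

def dumps_for_year(names, year):
    """Keep CC-MAIN dump names whose crawl year matches `year`."""
    pattern = re.compile(r"CC-MAIN-(\d{4})-(\d{2})")
    return [n for n in names
            if (m := pattern.fullmatch(n)) and int(m.group(1)) == year]

print(dumps_for_year(configs, 2023))  # -> ['CC-MAIN-2023-50', 'CC-MAIN-2023-40']
```

Selecting a single dump this way (and passing its name as the config argument to a loader) is the usual route to a time-sliced subset rather than the full 1.3T-token corpus.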
666ae33f611afe17cd982829 | BAAI/Infinity-Instruct | BAAI | {"configs": [{"config_name": "3M", "data_files": [{"split": "train", "path": "3M/*"}]}, {"config_name": "7M", "data_files": [{"split": "train", "path": "7M/*"}]}, {"config_name": "0625", "data_files": [{"split": "train", "path": "0625/*"}]}, {"config_name": "Gen", "data_files": [{"split": "train", "path": "Gen/*"}]}, {"config_name": "7M_domains", "data_files": [{"split": "train", "path": "7M_domains/*/*"}]}], "task_categories": ["text-generation"], "language": ["en", "zh"], "size_categories": ["1M<n<10M"], "license": "cc-by-sa-4.0", "extra_gated_prompt": "You agree to not use the dataset to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Company/Organization": "text", "Country": "country"}} | false | auto | 2024-10-31T15:06:59.000Z | 542 | 7 | false | 05cd7e304312b9afc9c4cb5817927805554af437 |
Infinity Instruct
Beijing Academy of Artificial Intelligence (BAAI)
[Paper][Code][🤗] (would be released soon)
The quality and scale of instruction data are crucial for model performance. Recently, open-source models have increasingly relied on fine-tuning datasets comprising millions of instances, necessitating both high quality and large scale. However, the open-source community has long been constrained by the high costs associated with building such extensive and… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/Infinity-Instruct. | 7,720 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.00530",
"arxiv:2405.19327",
"arxiv:2409.07045",
"arxiv:2408.07089",
"region:us"
] | 2024-06-13T12:17:03.000Z | null | null |
|
6727611f89116e24a4fc40a8 | selimc/InstructPapers-TR | selimc | {"license": "apache-2.0", "task_categories": ["text-generation", "text2text-generation", "question-answering"], "language": ["tr"], "tags": ["turkish", "academic-papers", "question-answering", "research", "dergipark"], "pretty_name": "InstructPapers-TR Dataset", "size_categories": ["1K<n<10K"]} | false | False | 2024-11-04T15:01:27.000Z | 7 | 7 | false | d45417369abcc8853c39c79acdd83e8bd9314fdf |
A specialized question-answering dataset derived from publicly available Turkish academic papers published on DergiPark.
The dataset contains synthetic QA pairs generated using the gemini-1.5-flash-002 model.
Each entry has metadata including the source paper's title, topic, and DergiPark URL.
Dataset Info
Number of Instances: ~11k
Dataset Size: 9.89 MB
Language: Turkish
Dataset License: apache-2.0
Dataset Category: Text2Text Generation
Data Fields… See the full description on the dataset page: https://huggingface.co/datasets/selimc/InstructPapers-TR. | 18 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"language:tr",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"turkish",
"academic-papers",
"question-answering",
"research",
"dergipark"
] | 2024-11-03T11:40:15.000Z | null | null |