Datasets
Tasks: Text Classification
Sub-tasks: topic-classification
Languages: Romanian
Size: 10K<n<100K
ArXiv:
License:
Commit • f49ef35 • 0 Parent(s):
Update files from the datasets library (from 1.6.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0
- .gitattributes +27 -0
- README.md +172 -0
- dataset_infos.json +1 -0
- dummy/moroco/1.0.0/dummy_data.zip +3 -0
- moroco.py +172 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
```
README.md
ADDED
@@ -0,0 +1,172 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- ro
- ro-md
licenses:
- cc-by-4-0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
---

# Dataset Card for MOROCO

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-instances)
  - [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/butnaruandrei/MOROCO)
- **Repository:** [Github](https://github.com/butnaruandrei/MOROCO)
- **Paper:** [Arxiv](https://arxiv.org/abs/1901.06543)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** raducu.ionescu@gmail.com

### Dataset Summary

Introducing MOROCO, the **Mo**ldavian and **Ro**manian Dialectal **Co**rpus. The MOROCO dataset contains Moldavian and Romanian text samples collected from the news domain. Each sample belongs to one of the following six topics: (0) culture, (1) finance, (2) politics, (3) science, (4) sports, (5) tech. The corpus features a total of 33,564 samples, each labelled with one of the aforementioned six categories. A train/validation/test split with 21,719/5,921/5,924 samples, respectively, is also included.

### Supported Tasks and Leaderboards

[LiRo Benchmark and Leaderboard](https://eemlcommunity.github.io/ro_benchmark_leaderboard/site/)

### Languages

The dataset is in Romanian (`ro`).

## Dataset Structure

### Data Instances

Below is an example sample from MOROCO:

```
{'id': '48482',
'category': 2,
'sample': '“$NE$ cum am spus, nu este un sfârşit de drum . Vom continua lupta cu toate instrumentele şi cu toate mijloacele legale, parlamentare şi civice pe care le avem la dispoziţie . Evident că vom contesta la $NE$ această lege, au anunţat şi colegii de la $NE$ o astfel de contestaţie . Practic trebuie utilizat orice instrument pe care îl identificăm pentru a bloca intrarea în vigoare a acestei legi . Bineînţeles, şi preşedintele are punctul său de vedere . ( . . . ) $NE$ legi sunt împănate de motive de neconstituţionalitate . Colegii mei de la departamentul juridic lucrează în prezent pentru a definitiva textul contestaţiei”, a declarat $NE$ $NE$ citat de news . ro . Senatul a adoptat, marţi, în calitate de for decizional, $NE$ privind statutul judecătorilor şi procurorilor, cu 80 de voturi ”pentru” şi niciun vot ”împotrivă”, în condiţiile în care niciun partid din opoziţie nu a fost prezent în sală .',
}
```

where `48482` is the sample ID, followed by the ground-truth category label, and then the text representing the actual content to be classified by topic.

Note: The category label takes integer values from 0 to 5.

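The loader exposes `category` as a `ClassLabel` whose names follow the label order above. As a minimal sketch (plain Python, with the topic names taken from the dataset config), the integer-to-topic mapping looks like:

```python
# Topic names in label order, as defined in the MOROCO dataset config.
CATEGORY_NAMES = ["culture", "finance", "politics", "science", "sports", "tech"]


def category_name(label: int) -> str:
    """Map an integer category label (0-5) to its topic name."""
    if not 0 <= label < len(CATEGORY_NAMES):
        raise ValueError(f"label must be in [0, 5], got {label}")
    return CATEGORY_NAMES[label]


print(category_name(2))  # the example above is labelled "politics"
```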

### Data Fields

- `id`: string; the unique identifier of a sample.
- `category`: integer in the range [0, 5]; the category assigned to a sample.
- `sample`: string; the news report to be classified by topic.

### Data Splits

The train/validation/test split contains 21,719/5,921/5,924 samples, respectively, each tagged with the category assigned to it.

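The split sizes are consistent with the total corpus size, which is easy to sanity-check:

```python
# Split sizes as reported in the dataset card.
splits = {"train": 21_719, "validation": 5_921, "test": 5_924}
total = sum(splits.values())
print(total)  # the full corpus size of 33,564 samples
```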
## Dataset Creation

### Curation Rationale

The samples are preprocessed to eliminate named entities. This is required to prevent classifiers from basing their decisions on features that are not related to the topics. For example, named entities that refer to politicians' or football players' names can provide clues about the topic. For more details, please read the [paper](https://arxiv.org/abs/1901.06543).

### Source Data

#### Data Collection and Normalization

For data collection, five of the most popular news websites in Romania and the Republic of Moldova were targeted. Since the dataset was obtained through web scraping, all HTML tags had to be removed and consecutive whitespace collapsed into a single space.

As part of the preprocessing, named entities such as country names, cities, and public figures were removed and replaced with `$NE$`. The need to remove them also follows from the scope of this dataset: categorization by topic. The authors removed named entities to prevent classifiers from basing their decisions on features that are not truly indicative of the topics.
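The normalization described above can be sketched roughly as follows. This is an illustration only: the regexes and the `KNOWN_ENTITIES` list are assumptions for demonstration, not the authors' actual pipeline, which used proper named-entity recognition.

```python
import re

# Illustrative stand-in for a real NER step.
KNOWN_ENTITIES = ["Bucuresti", "Chisinau"]


def normalize(raw_html: str) -> str:
    """Strip HTML tags, collapse whitespace, and mask named entities with $NE$."""
    text = re.sub(r"<[^>]+>", " ", raw_html)  # drop HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse consecutive whitespace
    for entity in KNOWN_ENTITIES:             # mask named entities
        text = text.replace(entity, "$NE$")
    return text


print(normalize("<p>Stiri   din  Bucuresti</p>"))  # Stiri din $NE$
```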

#### Who are the source language producers?

The original text comes from news websites in Romania and the Republic of Moldova.

### Annotations

#### Annotation process

As mentioned above, MOROCO is composed of text samples from the five most popular news websites in Romania and the Republic of Moldova, respectively. Since the targeted news websites tag their articles by topic, the text samples could be labeled automatically with the corresponding category.

#### Who are the annotators?

N/A

### Personal and Sensitive Information

The textual data collected for MOROCO consists of news reports freely available on the Internet and of public interest. To the best of the authors' knowledge, the collected texts contain no personal or sensitive information requiring special consideration.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures. Over the past three years there has been growing interest in studying Romanian from a computational linguistics perspective; however, datasets and resources in this language remain scarce.

### Discussion of Biases

The data included in MOROCO spans a well-defined time frame of a few years. Some topics that were of interest in the news landscape then might not appear in news websites today or a few years from now.

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

Published and managed by Radu Tudor Ionescu and Andrei Butnaru.

### Licensing Information

CC BY-SA 4.0 License

### Citation Information

```
@inproceedings{Butnaru-ACL-2019,
  author = {Andrei M. Butnaru and Radu Tudor Ionescu},
  title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}",
  booktitle = {Proceedings of ACL},
  year = {2019},
  pages = {688--698},
}
```

### Contributions

Thanks to [@MihaelaGaman](https://github.com/MihaelaGaman) for adding this dataset.
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"moroco": {"description": "The MOROCO (Moldavian and Romanian Dialectal Corpus) dataset contains 33564 samples of text collected from the news domain.\nThe samples belong to one of the following six topics:\n - culture\n - finance\n - politics\n - science\n - sports\n - tech\n", "citation": "@inproceedings{ Butnaru-ACL-2019,\n author = {Andrei M. Butnaru and Radu Tudor Ionescu},\n title = \"{MOROCO: The Moldavian and Romanian Dialectal Corpus}\",\n booktitle = {Proceedings of ACL},\n year = {2019},\n pages={688--698},\n}\n", "homepage": "https://github.com/butnaruandrei/MOROCO", "license": "CC BY-SA 4.0 License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"num_classes": 6, "names": ["culture", "finance", "politics", "science", "sports", "tech"], "names_file": null, "id": null, "_type": "ClassLabel"}, "sample": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "moroco", "config_name": "moroco", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 39314292, "num_examples": 21719, "dataset_name": "moroco"}, "test": {"name": "test", "num_bytes": 10877813, "num_examples": 5924, "dataset_name": "moroco"}, "validation": {"name": "validation", "num_bytes": 10721304, "num_examples": 5921, "dataset_name": "moroco"}}, "download_checksums": {"https://raw.githubusercontent.com/butnaruandrei/MOROCO/master/MOROCO/preprocessed/all/train_samples.txt": {"num_bytes": 39010202, "checksum": "633fa23f51771d06baecde648f045bbfeb43ff561bb3143a5c86c87d6b9b9232"}, "https://raw.githubusercontent.com/butnaruandrei/MOROCO/master/MOROCO/preprocessed/all/train_category_labels.txt": {"num_bytes": 173752, "checksum": "67df722183c134e2f75fac9b9bba8c1b6e58b2ff7afc81e1024c4eb840926b60"}, "https://raw.githubusercontent.com/butnaruandrei/MOROCO/master/MOROCO/preprocessed/all/validation_samples.txt": 
{"num_bytes": 10638402, "checksum": "e3bacccf536882cb8defe0b59c6a8b7a63ce373f6be5b3dd61449cd5cbb0ee14"}, "https://raw.githubusercontent.com/butnaruandrei/MOROCO/master/MOROCO/preprocessed/all/validation_category_labels.txt": {"num_bytes": 47368, "checksum": "83887b7cc674ac1d776975191aff950db22f7cc5bb98bbc72a0058c7b46ed6df"}, "https://raw.githubusercontent.com/butnaruandrei/MOROCO/master/MOROCO/preprocessed/all/test_samples.txt": {"num_bytes": 10794869, "checksum": "a4a1e4e4303772bcbc8d72cc7475cf73ab1bdb1487f1c8e50f5771aecdef9eb1"}, "https://raw.githubusercontent.com/butnaruandrei/MOROCO/master/MOROCO/preprocessed/all/test_category_labels.txt": {"num_bytes": 47392, "checksum": "5e33e48b1811e3fb08e9fb232e3da8f829a353c37796c6bd41871e3e699635b8"}}, "download_size": 60711985, "post_processing_size": null, "dataset_size": 60913409, "size_in_bytes": 121625394}}
dummy/moroco/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:1cd4ae1185c92efebf2460cbdb67bebd4f504eaf8567abaff7955fcb1f448130
size 11007
```
moroco.py
ADDED
@@ -0,0 +1,172 @@
```python
# coding=utf-8
# Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MOROCO: The Moldavian and Romanian Dialectal Corpus"""


import datasets


_CITATION = """\
@inproceedings{Butnaru-ACL-2019,
  author = {Andrei M. Butnaru and Radu Tudor Ionescu},
  title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}",
  booktitle = {Proceedings of ACL},
  year = {2019},
  pages={688--698},
}
"""

_DESCRIPTION = """\
The MOROCO (Moldavian and Romanian Dialectal Corpus) dataset contains 33564 samples of text collected from the news domain.
The samples belong to one of the following six topics:
    - culture
    - finance
    - politics
    - science
    - sports
    - tech
"""

_HOMEPAGE = "https://github.com/butnaruandrei/MOROCO"

_LICENSE = "CC BY-SA 4.0 License"

# The HuggingFace datasets library doesn't host the data; it only points to the original files.
# This can be an arbitrary nested dict/list of URLs (see the `_split_generators` method below).
_URL = "https://raw.githubusercontent.com/butnaruandrei/MOROCO/master/MOROCO/preprocessed/all/"

_TRAIN_SAMPLES_FILE = "train_samples.txt"
_TRAIN_LABELS_FILE = "train_category_labels.txt"

_VAL_SAMPLES_FILE = "validation_samples.txt"
_VAL_LABELS_FILE = "validation_category_labels.txt"

_TEST_SAMPLES_FILE = "test_samples.txt"
_TEST_LABELS_FILE = "test_category_labels.txt"


class MOROCOConfig(datasets.BuilderConfig):
    """BuilderConfig for the MOROCO dataset"""

    def __init__(self, **kwargs):
        super(MOROCOConfig, self).__init__(**kwargs)


class MOROCO(datasets.GeneratorBasedBuilder):
    """MOROCO dataset"""

    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        MOROCOConfig(name="moroco", version=VERSION, description="MOROCO dataset"),
    ]

    def _info(self):
        features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "category": datasets.features.ClassLabel(
                    names=[
                        "culture",
                        "finance",
                        "politics",
                        "science",
                        "sports",
                        "tech",
                    ]
                ),
                "sample": datasets.Value("string"),
            }
        )

        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types.
            features=features,
            # There is no default (input, target) tuple for as_supervised=True.
            supervised_keys=None,
            # Homepage of the dataset for documentation.
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""

        urls_to_download = {
            "train_samples": _URL + _TRAIN_SAMPLES_FILE,
            "train_labels": _URL + _TRAIN_LABELS_FILE,
            "val_samples": _URL + _VAL_SAMPLES_FILE,
            "val_labels": _URL + _VAL_LABELS_FILE,
            "test_samples": _URL + _TEST_SAMPLES_FILE,
            "test_labels": _URL + _TEST_LABELS_FILE,
        }

        downloaded_files = dl_manager.download(urls_to_download)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs are passed to _generate_examples.
                gen_kwargs={
                    "samples_filepath": downloaded_files["train_samples"],
                    "labels_filepath": downloaded_files["train_labels"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "samples_filepath": downloaded_files["test_samples"],
                    "labels_filepath": downloaded_files["test_labels"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "samples_filepath": downloaded_files["val_samples"],
                    "labels_filepath": downloaded_files["val_labels"],
                },
            ),
        ]

    def _generate_examples(self, samples_filepath, labels_filepath):
        """Yields the examples in raw (text) form."""

        with open(samples_filepath, "r", encoding="utf-8") as fsamples:
            sample_rows = fsamples.read().splitlines()

        with open(labels_filepath, "r", encoding="utf-8") as flabels:
            label_rows = flabels.readlines()

        for i, row in enumerate(sample_rows):
            samp_id = row.split("\t")[0]
            sample = "".join(row.split("\t")[1:])
            # Labels on disk are 1-indexed; shift to the 0-indexed ClassLabel range.
            label = int(label_rows[i].split("\t")[1])

            yield i, {
                "id": samp_id,
                "category": label - 1,
                "sample": sample,
            }
```
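For illustration, the tab-separated parsing and the 1-indexed-to-0-indexed label shift in `_generate_examples` can be exercised on synthetic rows (the file contents below are made up, not real MOROCO data):

```python
# Synthetic stand-ins for the samples/labels file formats: each samples row is
# "<id>\t<text>", each labels row is "<id>\t<label>" with 1-indexed labels.
sample_rows = ["48482\tUn exemplu de stire despre politica"]
label_rows = ["48482\t3\n"]

examples = []
for i, row in enumerate(sample_rows):
    samp_id = row.split("\t")[0]
    sample = "".join(row.split("\t")[1:])
    label = int(label_rows[i].split("\t")[1])  # 1-indexed on disk
    examples.append({"id": samp_id, "category": label - 1, "sample": sample})

print(examples[0]["category"])  # 2, i.e. "politics" in the ClassLabel order
```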