Upload dataset

- README.md +128 -1
- raw.jsonl.gz +3 -0
- resh-edu.py +48 -0

README.md
CHANGED
@@ -1,3 +1,130 @@
---
annotations_creators:
- crowdsourced
language:
- ru
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: resh.edu.ru
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-generation
- question-answering
task_ids:
- language-modeling
- open-domain-qa
---

# Dataset Card for resh.edu.ru

## Table of Contents
- [Dataset Card for resh.edu.ru](#dataset-card-for-resheduru)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing information](#licensing-information)

## Dataset Description

- **Repository:** https://github.com/its5Q/resh-edu

### Dataset Summary

This is a dataset of lessons scraped from [resh.edu.ru](https://resh.edu.ru/). It contains 7260 lessons with metadata, a summary of each lesson, and training exercises.
The raw, unprocessed dataset is stored in `raw.jsonl.gz`. A processed version for causal language modeling will be released in the coming days.
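Since `raw.jsonl.gz` is gzipped JSONL (one lesson object per line), it can be streamed without decompressing the whole file first. A minimal sketch; the `iter_lessons` helper is illustrative, and the tiny stand-in file here merely mimics the real archive's format:

```python
import gzip
import json
import os
import tempfile

def iter_lessons(path):
    # Stream one JSON object (one lesson) per line from a gzipped JSONL file.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Demo with a tiny stand-in file; point `path` at raw.jsonl.gz for real use.
fd, path = tempfile.mkstemp(suffix=".jsonl.gz")
os.close(fd)
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps({"id": 1, "title": "demo"}) + "\n")

lessons = list(iter_lessons(path))
print(lessons[0]["id"])  # 1
os.remove(path)
```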

### Languages

The dataset is in Russian, except when the lesson subject is a foreign language (there are Chinese, German, English, and other lessons).

## Dataset Structure

### Data Fields

Each lesson in the dataset has the following structure:
- `id` - lesson ID (`int`)
- `format` - the lesson format that was parsed, old or new (`string`)
- `subject` - the subject the lesson belongs to (`string`)
- `title` - lesson title (`string`)
- `author` - lesson author's name (empty in new-format lessons) (`string`)
- `grade` - lesson grade (`int`)
- `summary` - lesson summary in HTML (`string`)
- `excercises` - list of training exercises for the lesson (note the spelling of the field name). The structure of an exercise varies by type, as described below.

All exercises have the following fields:
- `id` - exercise ID (`int`)
- `title` - exercise title (`string`)
- `question` - question HTML (`string`)
- `question_type` - type of question; the differences between types are described below (`string`)

These are the question types:
- `single_choice` and `multiple_choice`

  A task that requires you to choose one or more answers to a question.
  - `choices` - a list of possible choices
    - `id` - choice ID (`string`)
    - `html` - choice HTML (`string`)
    - `correct` - indicates whether the choice is correct (`bool`)
- `text_entry`

  Requires you to fill in the blanks by typing in your answer.
  - `text` - the text with blanks, in HTML (`string`)
  - `answers` - correct answers to fill in (`list[string]`)
- `gap_match_text`

  Requires you to fill gaps in the text by choosing from a predefined list of answers.
  - `text` - the text with gaps, in HTML (`string`)
  - `choices` - possible answers to fill the gaps with, plaintext (`list[string]`)
  - `answers` - correct answers, plaintext (`list[string]`)
- `gap_match_color`

  Requires you to paint different parts of the text with colors.
  - `text` - the text to paint, in HTML (`string`)
  - `answers` - list of text-part IDs and their respective colors
- `two_sets_association`

  Requires you to split a list of items into pairs.
  - `choices` - list of all items, in HTML (`list[string]`)
  - `pairs` - list of items split into pairs, in HTML (`list[string]`)
- `inline_choice`

  Requires you to fill in the blanks by choosing from a predefined set of answers.
  - `choices` - list of lists of possible choices, plaintext (`list[list[string]]`)
  - `answers` - correct answers, plaintext (`list[string]`)
- `order`

  Requires you to put items in the correct order. One exercise can contain multiple tasks, hence the nested lists.
  - `choices` - lists of scrambled items, in HTML (`list[list[string]]`)
  - `answers` - lists of correctly ordered items, in HTML (`list[list[string]]`)
- `gap_match_table`

  Requires you to split items into two groups.
  - `columns` - group names, plaintext (`list[string]`)
  - `answers` - lists of items in each column, in HTML (`list[list[string]]`)
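To make the layout concrete, here is a minimal synthetic record following the fields above (all values are invented for illustration; the real file has one such object per line, and the exercise list uses the `excercises` key spelling):

```python
import json

# A synthetic lesson record mirroring the field layout described above.
sample_line = json.dumps({
    "id": 1,
    "format": "new",
    "subject": "Математика",
    "title": "Example lesson",
    "author": "",
    "grade": 5,
    "summary": "<p>...</p>",
    "excercises": [  # field name as spelled in the dataset
        {
            "id": 10,
            "title": "Task 1",
            "question": "<p>2 + 2 = ?</p>",
            "question_type": "single_choice",
            "choices": [
                {"id": "a", "html": "<p>4</p>", "correct": True},
                {"id": "b", "html": "<p>5</p>", "correct": False},
            ],
        }
    ],
})

# Collect the correct options of all choice-type exercises in a lesson.
lesson = json.loads(sample_line)
correct = [
    c["html"]
    for ex in lesson["excercises"]
    if ex["question_type"] in ("single_choice", "multiple_choice")
    for c in ex["choices"]
    if c["correct"]
]
print(correct)  # ['<p>4</p>']
```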

### Data Splits

There are 7260 lessons, all in the train split.

## Dataset Creation

The data was scraped using a script located in [my GitHub repository](https://github.com/its5Q/resh-edu).

## Additional Information

### Dataset Curators

- https://github.com/its5Q

### Licensing information

The lessons provided on [resh.edu.ru](https://resh.edu.ru/) are not protected by copyright and can be reused and distributed freely, as per Article 43 of the Constitution of the Russian Federation and Federal Law No. 273 of 29 December 2012 on Education.

raw.jsonl.gz
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:739a9f2c9e981391b9841c2b0ddefda9047c6c8d4be85ddb901c3ed256198a80
size 199235657

resh-edu.py
ADDED
@@ -0,0 +1,48 @@
"""resh.edu.ru lessons dataset"""

import json

import datasets

_DESCRIPTION = """\
This is a dataset of lessons and tests scraped from resh.edu.ru
"""

_HOMEPAGE = "https://huggingface.co/datasets/its5Q/resh-edu"

_LICENSE = "cc0-1.0"

_URLS = [
    "https://huggingface.co/datasets/its5Q/resh-edu/resolve/main/raw.jsonl.gz"
]


class ReshEdu(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("0.1.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            homepage=_HOMEPAGE,
            license=_LICENSE
        )

    def _split_generators(self, dl_manager):
        # download_and_extract returns one local path per URL in _URLS;
        # the gzipped JSONL is decompressed automatically.
        data_dir = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_dir[0],
                    "split": "train",
                },
            )
        ]

    def _generate_examples(self, filepath, split):
        # One JSON object per line; the enumeration index serves as the key.
        with open(filepath, 'r', encoding="utf-8") as f:
            for i, line in enumerate(f):
                data = json.loads(line)
                yield i, data
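The generator above simply enumerates JSON lines and yields `(key, example)` pairs. Its core logic can be sanity-checked stand-alone; the helper below mirrors `_generate_examples`, and the temporary file is a stand-in for the real data:

```python
import json
import os
import tempfile

def generate_examples(filepath):
    # Mirrors ReshEdu._generate_examples: yield (index, parsed JSON) per line.
    with open(filepath, "r", encoding="utf-8") as f:
        for i, line in enumerate(f):
            yield i, json.loads(line)

# Build a two-line stand-in JSONL file and run the generator over it.
with tempfile.NamedTemporaryFile(
    "w", suffix=".jsonl", delete=False, encoding="utf-8"
) as f:
    f.write(json.dumps({"id": 7}) + "\n")
    f.write(json.dumps({"id": 8}) + "\n")
    path = f.name

examples = list(generate_examples(path))
print(examples)  # [(0, {'id': 7}), (1, {'id': 8})]
os.remove(path)
```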