---
language:
- ja
size_categories:
- 10K<n<100K
---
# Overview

This dataset provides data from Aozora Bunko (青空文庫), a website that compiles public-domain books in Japan, in a convenient, user-friendly format suited to machine-learning applications.

# Methodology

## 1. Data collection

We first downloaded the [CSV file that lists all works](https://www.aozora.gr.jp/index_pages/person_all.html). The information extracted from this CSV is incorporated into the `meta` field.

Next, we filtered out any books not categorized as public domain.

We then retrieved the main text of the book corresponding to each remaining row of the CSV and incorporated it into the `text` field.
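The collection step above can be sketched roughly as follows. This is a minimal illustration, not the dataset's actual code: the function name is hypothetical, and the copyright-flag column name is an assumption about the listing CSV's headers.

```python
import csv
import io

# Illustrative sketch of the collection step: read the Aozora Bunko
# listing CSV and keep only public-domain works, carrying each row
# along as the future `meta` field. Column names are assumptions.
def collect_public_domain(csv_text):
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Assumed copyright-flag column; "なし" (none) would mark a
        # work whose copyright has expired, i.e. public domain.
        if row.get("作品著作権フラグ") != "なし":
            continue
        records.append({"meta": dict(row)})
    return records
```

In practice the main text of each kept work would then be fetched from its card page and stored alongside `meta` as `text`.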

## 2. Deduplication

We removed entries whose `図書カードURL` (library card URL) in the CSV did not match the `作品ID` (work ID) and `人物ID` (person ID). Even when these matched, rows whose text was identical to previously encountered text were discarded.
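The two deduplication rules can be sketched as below. The card-URL pattern reconstructed from the IDs is an illustrative guess, and the field names are assumptions, not the dataset's exact schema.

```python
# Minimal sketch of the deduplication pass, assuming each entry's
# `meta` carries the card URL plus work and person IDs.
def deduplicate(entries):
    seen_texts = set()
    kept = []
    for entry in entries:
        meta = entry["meta"]
        expected = (
            f"https://www.aozora.gr.jp/cards/{meta['person_id']}"
            f"/card{meta['work_id']}.html"
        )
        # Rule 1: drop rows whose card URL disagrees with the IDs.
        if meta["card_url"] != expected:
            continue
        # Rule 2: drop rows whose text was already seen.
        if entry["text"] in seen_texts:
            continue
        seen_texts.add(entry["text"])
        kept.append(entry)
    return kept
```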

## 3. Cleaning

The data in the `text` field was then cleaned in the following sequence:

1. Convert newlines to `\n`
2. Remove headers
3. Remove footnotes and add them to the `footnote` field
4. Remove ruby (phonetic guides)
5. Convert specific characters, such as foreign characters and iteration marks, into standard Unicode characters
6. Convert inserted notes into regular parenthetical text
7. Remove any remaining markup
8. Remove leading and trailing whitespace and horizontal rules
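A few of the steps above can be sketched as follows, assuming Aozora Bunko's plain-text conventions (`《…》` for ruby, `［＃…］` for editorial notes, a header ending in a dashed rule). The regexes are simplified approximations of steps 1-4 and 8, not the dataset's exact rules.

```python
import re

# Simplified sketch of part of the cleaning sequence.
def clean_text(raw):
    text = raw.replace("\r\n", "\n")                # 1. normalize newlines
    text = re.sub(r"^.*?-{10,}\n", "", text,
                  count=1, flags=re.S)              # 2. drop the header block
    footnotes = re.findall(r"［＃.*?］", text)       # 3. collect notes for the
    text = re.sub(r"［＃.*?］", "", text)            #    `footnote` field
    text = re.sub(r"｜?(\S+?)《.*?》", r"\1", text)  # 4. strip ruby readings
    return text.strip(), footnotes                  # 8. trim whitespace
```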

# License

CC BY 4.0