---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- ja
size_categories:
- 10K<n<100K
---
|
|
|
# Overview
|
This dataset provides the contents of [Aozora Bunko (青空文庫)](https://www.aozora.gr.jp/), a website that compiles public-domain books in Japan, in a convenient format for machine-learning applications.
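
For example, the dataset can be loaded with the Hugging Face `datasets` library. The repository id below is a placeholder; substitute the actual path of this dataset.

```python
from datasets import load_dataset

# "user/aozorabunko" is a hypothetical repository id used only for illustration.
ds = load_dataset("user/aozorabunko", split="train")

print(ds[0]["meta"])        # bibliographic information taken from the index CSV
print(ds[0]["text"][:200])  # beginning of the cleaned book text
```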
|
|
|
# Methodology
|
|
|
## 1. Data collection
|
We first downloaded the [CSV file that lists all works](https://www.aozora.gr.jp/index_pages/person_all.html). The information extracted from this CSV populates the `meta` field.

Next, we filtered out any books not categorized as public domain.

We then retrieved the main text of the book corresponding to each remaining row of the CSV and stored it in the `text` field.
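
The sketch below illustrates how such a selection might look. The local filename and the copyright-flag column names (`作品著作権フラグ`, `人物著作権フラグ`) are assumptions about the Aozora Bunko index CSV, not part of this card, and should be verified against the downloaded file.

```python
import pandas as pd

# Assumed local copy of the "list of all works" CSV linked above.
df = pd.read_csv("list_person_all_extended_utf8.csv")

# Keep only public-domain works: both copyright flags read "なし" (none).
df = df[(df["作品著作権フラグ"] == "なし") & (df["人物著作権フラグ"] == "なし")]

# Each remaining row becomes one record: its columns populate `meta`,
# and the book body (retrieved separately) populates `text`.
records = [{"meta": row.to_dict(), "text": None} for _, row in df.iterrows()]
```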
|
|
|
## 2. Deduplication
|
We removed entries whose `図書カードURL` (library-card URL) in the CSV was inconsistent with their `作品ID` (work ID) and `人物ID` (person ID).

In addition, rows whose text was identical to previously encountered text were discarded.
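
A sketch of both checks, continuing from the DataFrame in the previous sketch with a retrieved `text` column added, and assuming the library-card URL follows the pattern `.../cards/<person id>/card<work id>.html`:

```python
import re

import pandas as pd

def ids_match(row: pd.Series) -> bool:
    """Return True if the library-card URL agrees with the person and work IDs."""
    m = re.search(r"/cards/(\d+)/card(\d+)\.html", str(row["図書カードURL"]))
    if m is None:
        return False
    return int(m.group(1)) == int(row["人物ID"]) and int(m.group(2)) == int(row["作品ID"])

df = df[df.apply(ids_match, axis=1)]                  # drop rows with inconsistent IDs
df = df.drop_duplicates(subset="text", keep="first")  # drop repeated texts
```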
|
|
|
## 3. Cleaning
|
The data in the `text` field was then cleaned in the following sequence (a partial code sketch follows the list):
|
|
|
1. Convert new lines to `\n`
2. Remove headers
3. Remove footnotes and add them to the `footnote` field
4. Remove ruby (phonetic guides)
5. Convert specific characters, such as external characters and iteration marks, into standard Unicode characters
6. Convert inserted notes into regular parenthetical text
7. Remove any remaining markup
8. Remove leading and trailing whitespace and horizontal rules
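
A partial sketch covering steps 1, 4, 7, and 8, assuming standard Aozora Bunko markup: ruby readings in `《…》` with `｜` marking the start of the ruby base, and editorial annotations wrapped in `［＃…］`. The actual pipeline performs every step listed above.

```python
import re

def clean_aozora(text: str) -> str:
    text = text.replace("\r\n", "\n")         # 1. normalize new lines
    text = re.sub(r"《[^》]*》", "", text)     # 4. drop ruby readings
    text = text.replace("｜", "")              # 4. drop ruby-base delimiters
    text = re.sub(r"［＃[^］]*］", "", text)   # 7. drop remaining ［＃…］ markup
    return text.strip()                        # 8. trim leading/trailing whitespace
```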
|
|
|
# License
|
CC BY 4.0