---
license: cc-by-4.0
task_categories:
  - text-generation
  - text-classification
language:
  - ja
size_categories:
  - 10K<n<100K
---

# Overview

This dataset provides the works from Aozora Bunko (青空文庫), a website that compiles public-domain books in Japan, in a convenient format well suited to machine-learning applications.

[For Japanese readers] A summary in Japanese is available on Qiita: https://qiita.com/akeyhero/items/b53eae1c0bc4d54e321f

# Methodology

The code to reproduce this dataset is available on GitHub: globis-org/aozorabunko-exctractor.

## 1. Data collection

First, we downloaded the CSV file that lists all works on Aozora Bunko; the information extracted from this CSV is stored in the `meta` field. Next, we filtered out any books not categorized as public domain. Finally, for each remaining row of the CSV we retrieved the main text of the book, converted it to UTF-8, and stored it in the `text` field.
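
As a rough illustration of the public-domain filter, the snippet below keeps only copyright-expired rows of the Aozora Bunko index CSV. The file name and the 作品著作権フラグ column come from the publicly distributed index CSV (list_person_all_extended_utf8.csv); this is a sketch, not the exact extraction code used to build the dataset.

```python
import pandas as pd

# Sketch only: column names follow the Aozora Bunko index CSV
# (list_person_all_extended_utf8.csv); the actual extractor may differ.
works = pd.read_csv("list_person_all_extended_utf8.csv")
public_domain = works[works["作品著作権フラグ"] == "なし"]  # "なし" = copyright expired
print(len(public_domain), "public-domain rows")
```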

## 2. Deduplication

We removed entries whose 図書カードURL (library card URL) in the CSV did not match their 作品ID (work ID) and 人物ID (person ID). In addition, entries whose text was identical to previously encountered text were discarded.
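
A minimal sketch of the second rule (keeping only the first occurrence of each text); the `drop_duplicate_texts` helper is hypothetical and not part of the actual build code:

```python
def drop_duplicate_texts(books: list[dict]) -> list[dict]:
    """Hypothetical dedup pass: keep only the first occurrence of each text."""
    seen: set[str] = set()
    unique = []
    for book in books:
        if book["text"] in seen:
            continue  # discard entries identical to previously encountered text
        seen.add(book["text"])
        unique.append(book)
    return unique
```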

## 3. Cleaning

The data in the `text` field was then cleaned in the following sequence (a minimal sketch of steps 1 and 5 is shown after the list):

  1. Convert new lines to \n
  2. Remove headers
  3. Remove footnotes and add them to the footnote field
  4. Convert inserted notes into regular parenthetical text
  5. Remove ruby (phonetic guides)
  6. Convert specific characters, such as external characters and iteration marks, into standard Unicode characters
  7. Remove any remaining markup
  8. Remove leading and trailing new lines and horizontal rules
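
The sketch below illustrates steps 1 and 5, assuming the standard Aozora Bunko ruby notation 漢字《かんじ》 (optionally preceded by ｜). It is an illustration of the idea, not the exact cleaning code used for this dataset.

```python
import re

def normalize_newlines(text: str) -> str:
    # Step 1: normalize CRLF / CR to \n
    return text.replace("\r\n", "\n").replace("\r", "\n")

def strip_ruby(text: str) -> str:
    # Step 5: drop phonetic guides written as 《…》 and the explicit ruby-start marker ｜
    text = re.sub(r"《[^》]*》", "", text)
    return text.replace("｜", "")
```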

# Tips

If you prefer to use only modern Japanese, you can filter entries with: `row["meta"]["文字遣い種別"] == "新字新仮名"`.

# Example

```python
>>> from datasets import load_dataset
>>> ds = load_dataset('globis-university/aozorabunko-clean')
>>> ds
DatasetDict({
    train: Dataset({
        features: ['text', 'footnote', 'meta'],
        num_rows: 16951
    })
})
>>> ds = ds.filter(lambda row: row['meta']['文字遣い種別'] == '新字新仮名')  # only modern Japanese
>>> ds
DatasetDict({
    train: Dataset({
        features: ['text', 'footnote', 'meta'],
        num_rows: 10246
    })
})
>>> book = ds['train'][0]  # one of the works
>>> book['meta']['作品名']
'ウェストミンスター寺院'
>>> text = book['text']  # main content
>>> len(text)
10639
>>> print(text[:100])
深いおどろきにうたれて、
名高いウェストミンスターに
真鍮や石の記念碑となって
すべての王侯貴族が集まっているのをみれば、
今はさげすみも、ほこりも、見栄もない。
善にかえった貴人の姿、
華美と俗世の
```

# License

CC BY 4.0