---
license: cc-by-sa-3.0
task_categories:
  - text-generation
language:
  - fr
tags:
  - wikipedia
  - wiki
  - fr.wikipedia.org
size_categories:
  - 1M<n<10M
---

# French Wikipedia Dataset

## Overview

This dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the official French Wikipedia site on September 24, 2023. It aims to address issues with existing Wikipedia datasets, such as poorly parsed text missing essential information like dates and locations.

## Format

- Type: Text
- File Extension: `.txt`

## Structure

The dataset is divided into the following splits:

- train.txt: 3.48 GB - 1,810,000 rows - 90%
- test.txt: 194 MB - 100,500 rows - 5%
- valid.txt: 194 MB - 100,500 rows - 5%

Each article in the dataset exceeds 1200 characters in length.
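If you prefer to work with the splits through the Hugging Face `datasets` library, a minimal sketch such as the following should work; the file names come from the split list above, and the local paths are an assumption you will need to adjust.

```python
# A minimal loading sketch, assuming the three .txt files listed above have
# been downloaded locally; adjust the paths to your own setup.
from datasets import load_dataset

dataset = load_dataset(
    "text",
    data_files={
        "train": "train.txt",
        "test": "test.txt",
        "validation": "valid.txt",
    },
)

# Each row is one line of text from the corresponding split.
print(dataset["train"][0]["text"])
```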

## Data Cleaning and Preprocessing

The following elements have been excluded from the dataset:

- H1-H4 headings
- Lists
- Tables
- Sources and references
- Infoboxes
- Banners
- LaTeX code

The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered with the `langid` library to keep only French text. Some quotations or short terms in other languages, including non-Latin scripts, may still be present.
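As a rough illustration of the kind of language filter described above (a sketch, not the exact script used to build the dataset), a line-level filter with `langid` could look like this; the file names are placeholders.

```python
# Sketch of a line-level language filter with langid, assuming one document
# (or paragraph) per line; illustrative only, not the original pipeline.
import langid

def is_french(text: str) -> bool:
    """Classify the text with langid and keep it only if it is tagged 'fr'."""
    lang, _score = langid.classify(text)
    return lang == "fr"

with open("train.txt", encoding="utf-8") as src, \
     open("train_fr_only.txt", "w", encoding="utf-8") as dst:
    for line in src:
        if line.strip() and is_french(line):
            dst.write(line)
```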

## Exploring the Dataset

You can use the `explore_dataset.py` script to explore the dataset by randomly displaying a chosen number of lines. The script builds and saves an index of line-break offsets, enabling faster data retrieval and display.
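As a hypothetical sketch of the underlying idea (function and file names here are illustrative and not those of `explore_dataset.py`): store the byte offset of every line start once, then seek straight to random lines without rescanning the whole file.

```python
# Illustrative line-offset index: build it once, cache it on disk, then use it
# to jump directly to random lines. Names and paths are placeholders.
import pickle
import random

def build_index(path: str, index_path: str) -> list[int]:
    """Record the byte offset of each line start and cache the index on disk."""
    offsets = [0]
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offsets[-1] + len(line))
    offsets.pop()  # the final entry points past the last line
    with open(index_path, "wb") as f:
        pickle.dump(offsets, f)
    return offsets

def sample_lines(path: str, offsets: list[int], n: int = 5) -> list[str]:
    """Seek to n random line offsets and decode the corresponding lines."""
    lines = []
    with open(path, "rb") as f:
        for offset in random.sample(offsets, n):
            f.seek(offset)
            lines.append(f.readline().decode("utf-8").rstrip("\n"))
    return lines

offsets = build_index("train.txt", "train.idx")
for text in sample_lines("train.txt", offsets, n=3):
    print(text[:120])
```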

## Additional Information

This dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.