---
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- fr
tags:
- wikipedia
- wiki
- fr.wikipedia.org
size_categories:
- 1M<n<10M
---

# French Wikipedia Dataset

## Overview

This dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the [official French Wikipedia site](https://fr.wikipedia.org/) on September 24, 2023.

Numerous Wikipedia datasets already exist, including the official one built from the [Wikipedia dumps](https://huggingface.co/datasets/wikipedia). Unfortunately, the text of its French version is incomplete, missing many elements such as dates and locations.

As the saying goes, "garbage in, garbage out."

## Format

- **Type**: Text
- **File Extension**: `.txt`

## Structure

The dataset is divided into the following splits:

- `train.txt`: 3.45 GB - 1,810,000 rows - 90%
- `test.txt`: 192 MB - 100,575 rows - 5%
- `valid.txt`: 192 MB - 100,575 rows - 5%

Each article in the dataset exceeds 1,400 characters in length.
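
To work with these splits programmatically, the sketch below uses the generic `text` loader from the `datasets` library; it assumes the three files have been downloaded to the current working directory and keeps the file names listed above:

```python
# Minimal sketch: load the three .txt splits with the generic "text" builder.
# Assumes train.txt, test.txt and valid.txt are present in the working directory.
from datasets import load_dataset

dataset = load_dataset(
    "text",
    data_files={
        "train": "train.txt",
        "test": "test.txt",
        "validation": "valid.txt",
    },
)

print(dataset)                      # DatasetDict with train/test/validation splits
print(dataset["train"][0]["text"])  # first line of the training split
```
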
## Data Cleaning and Preprocessing

The following elements have been excluded from the dataset:

- H1-H4 headings
- Lists
- Tables
- Sources and references
- Infoboxes
- Banners
- LaTeX code

The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered with the `langid` library so that only French text is included. Some quotations or short terms in other languages, including languages written in non-Latin scripts, may still be present.
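
The exact filtering rules live in the extraction scripts described below; as a rough illustration only, a per-paragraph language check with `langid` could look like this (the minimum-length threshold is an assumption):

```python
# Illustrative sketch of a langid-based French filter (the threshold is an
# assumption, not the exact rule used to build the dataset).
import langid

def keep_french(paragraph: str, min_chars: int = 50) -> bool:
    """Keep a paragraph only if it is long enough and classified as French."""
    if len(paragraph) < min_chars:
        return False
    lang, _score = langid.classify(paragraph)
    return lang == "fr"

paragraphs = [
    "La Révolution française est une période de bouleversements en France.",
    "This short sentence is clearly written in English.",
]
print([p for p in paragraphs if keep_french(p)])
```
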
## Exploring the Dataset

You can use the `explore_dataset.py` script to explore the dataset by displaying a random selection of lines from it. The script creates and saves an index of the line breaks, enabling faster retrieval and display.
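
`explore_dataset.py` is the reference implementation; the snippet below is only a sketch of the general approach it describes (index the line offsets once, then `seek` to random positions), with hypothetical file names:

```python
# Sketch of random access through a line-offset index (hypothetical file names;
# see explore_dataset.py for the actual implementation).
import os
import pickle
import random

def build_index(path: str) -> list[int]:
    """Record the byte offset at which every line starts."""
    offsets, offset = [], 0
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offset)
            offset += len(line)
    return offsets

def sample_lines(path: str, offsets: list[int], n: int = 5) -> list[str]:
    """Seek to n random line offsets and decode the corresponding lines."""
    picked = []
    with open(path, "rb") as f:
        for off in random.sample(offsets, n):
            f.seek(off)
            picked.append(f.readline().decode("utf-8").rstrip("\n"))
    return picked

# Build the index once and reuse it on later runs.
if os.path.exists("train.idx"):
    with open("train.idx", "rb") as f:
        offsets = pickle.load(f)
else:
    offsets = build_index("train.txt")
    with open("train.idx", "wb") as f:
        pickle.dump(offsets, f)

print("\n".join(sample_lines("train.txt", offsets, n=3)))
```
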
## Additional Information

This dataset is a subset of a larger 10 GB French dataset, which also contains several thousand French-language books and theses, as well as several hundred thousand Francophone news articles.

---

# WIKIPEDIA EXTRACT

Inside the `/extract_wiki/` directory, you'll find the Python scripts used to extract the text and compile this dataset.

## Requirements:

```bash
pip install datasets aiohttp aiofiles beautifulsoup4 langid
```

## Scripts:

1. **1_extract_link.py**

```bash
python 1_extract_link.py
```

This script downloads the Wikipedia dataset from Hugging Face, extracts the article URLs, and saves them to a text file for further processing.
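
The published script is the authoritative version; the sketch below only illustrates the idea of pulling the `url` column from the Hugging Face `wikipedia` dump and writing it to a text file (the dump configuration name and the output file name are assumptions):

```python
# Sketch of the URL-extraction step (not the actual 1_extract_link.py).
# The "20220301.fr" configuration and "wiki_links.txt" output name are assumptions;
# recent versions of `datasets` may also require trust_remote_code=True here.
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20220301.fr", split="train")

with open("wiki_links.txt", "w", encoding="utf-8") as f:
    for record in wiki:
        f.write(record["url"] + "\n")
```
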
2. **2_extract_content.py**

```bash
python 2_extract_content.py
```

This script retrieves the HTML source of the Wikipedia pages listed in the URL file. Instead of saving the entire page, it trims the content down to the main article section, thereby limiting the size of each record.
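
The real script is asynchronous (hence the `aiohttp` and `aiofiles` requirements); the simplified sketch below fetches a single page and keeps only the main article container, where the `mw-content-text` selector and the example URL are assumptions:

```python
# Simplified sketch of fetching one page and trimming it to the article body
# (the actual 2_extract_content.py processes many URLs asynchronously;
# the selector and example URL below are assumptions).
import asyncio
import aiohttp
from bs4 import BeautifulSoup

async def fetch_article_html(session: aiohttp.ClientSession, url: str) -> str:
    async with session.get(url) as resp:
        html = await resp.text()
    soup = BeautifulSoup(html, "html.parser")
    # Keep only the rendered article body instead of the full page.
    content = soup.find("div", id="mw-content-text")
    return str(content) if content else ""

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        html = await fetch_article_html(session, "https://fr.wikipedia.org/wiki/France")
        print(f"{len(html)} characters of article HTML")

asyncio.run(main())
```
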
3. **3_extract_txt.py**

```bash
python 3_extract_txt.py
```

This script extracts the plain text from the saved HTML pages and runs a series of checks to decide which content should be kept or discarded, including language identification, special characters, numbers, and so on.
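
Only the published script is authoritative; the sketch below merely illustrates the kind of extraction and filtering described above and under "Data Cleaning and Preprocessing" (the tag list, length threshold, and language check are assumptions):

```python
# Sketch of HTML-to-text extraction with the kinds of checks described above
# (tag list, length threshold, and the language check are assumptions,
# not the exact rules used by 3_extract_txt.py).
import langid
from bs4 import BeautifulSoup

# Element types excluded from the dataset: headings, lists, tables,
# references, and math/LaTeX markup.
REMOVE_TAGS = ["h1", "h2", "h3", "h4", "ul", "ol", "table", "sup", "math"]

def html_to_clean_text(html: str, min_chars: int = 1400) -> str | None:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(REMOVE_TAGS):
        tag.decompose()
    text = " ".join(soup.get_text(separator=" ").split())
    if len(text) < min_chars:
        return None          # too short to keep
    lang, _score = langid.classify(text)
    if lang != "fr":
        return None          # not identified as French
    return text
```
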
|