---
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- fr
tags:
- wikipedia
- wiki
- fr.wikipedia.org
size_categories:
- 1M<n<10M
---

# French Wikipedia Dataset

## Overview

This dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the [official French Wikipedia site](https://fr.wikipedia.org/) on September 24, 2023.
Numerous Wikipedia datasets already exist, including the official one built from [Wikipedia's dumps](https://huggingface.co/datasets/wikipedia). Unfortunately, the French portion of that dataset is incomplete: many elements, such as dates and locations, are missing from the text.
As the saying goes, "garbage in, garbage out."

## Format

- **Type**: Text
- **File Extension**: `.txt`

## Structure

The dataset is divided into the following splits:

| File        | Size    | Rows      | Share |
|-------------|---------|-----------|-------|
| `train.txt` | 3.45 GB | 1,810,000 | 90%   |
| `test.txt`  | 192 MB  | 100,575   | 5%    |
| `valid.txt` | 192 MB  | 100,575   | 5%    |

Each article in the dataset exceeds 1,400 characters in length.
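
Since the splits are plain text files, they can be loaded with the generic `text` loader from the `datasets` library. A minimal sketch, assuming the three files are in the current directory (otherwise point `data_files` at the repository paths):

```python
from datasets import load_dataset

# Load the three raw-text splits listed above.
dataset = load_dataset(
    "text",
    data_files={
        "train": "train.txt",
        "test": "test.txt",
        "validation": "valid.txt",
    },
)

# Each row is one line of text.
print(dataset["train"][0]["text"])
```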

## Data Cleaning and Preprocessing

The following elements have been excluded from the dataset:

- H1 to H4 headings
- Lists
- Tables
- Sources and references
- Infoboxes
- Banners
- LaTeX code

The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the `langid` library to include only text in French. Some quotations or short terms in other languages, including non-Latin languages, may still be present.
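
For reference, here is a minimal sketch of this kind of language filter using `langid`; it is an illustrative re-creation, not the exact code used to build the dataset:

```python
import langid

def keep_passage(text: str, min_chars: int = 1400) -> bool:
    """Keep passages that are long enough and classified as French."""
    if len(text) < min_chars:
        return False
    lang, _score = langid.classify(text)  # returns (language_code, score)
    return lang == "fr"
```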

## Exploring the Dataset

You can explore the dataset with the `explore_dataset.py` script, which displays a chosen number of randomly selected lines. The script creates and saves an index of line-break positions, enabling faster data retrieval and display.
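
The idea behind such an index is simple: record the byte offset where each line starts, then `seek` directly to a random offset instead of scanning the whole file. A minimal sketch under that assumption (the actual script may store its index differently):

```python
import random

def build_line_index(path: str) -> list[int]:
    """Record the byte offset of the start of every line."""
    offsets = [0]
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offsets[-1] + len(line))
    return offsets[:-1]  # drop the offset past the last line

def show_random_lines(path: str, offsets: list[int], n: int = 5) -> None:
    """Jump straight to n random lines without scanning the file."""
    with open(path, "rb") as f:
        for offset in random.sample(offsets, n):
            f.seek(offset)
            print(f.readline().decode("utf-8").rstrip())

offsets = build_line_index("valid.txt")
show_random_lines("valid.txt", offsets)
```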

## Additional Information

This dataset is a subset of a larger 10 GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.


---

# WIKIPEDIA EXTRACT

Inside the `/extract_wiki/` directory, you'll find Python scripts used to extract text to compile this dataset.

## Requirements:
```bash
pip install datasets aiohttp aiofiles beautifulsoup4 langid
```

## Scripts:

1. **1_extract_link.py**
   ```bash
   python 1_extract_link.py
   ```
   Downloads the Wikipedia dataset from Hugging Face, extracts the article URLs, and saves them to a text file for further processing.

2. **2_extract_content.py**
   ```bash
   python 2_extract_content.py
   ```
   This script retrieves the source code of Wikipedia pages from the URLs listed in the text file. Instead of saving the entire HTML of each page, it trims the content down to the main article section, limiting the size of each record (see the sketch after this list).

3. **3_extract_txt.py**
   ```bash
   python 3_extract_txt.py
   ```
   This script extracts the text from the HTML pages and runs a series of checks to decide which content to retain or exclude, covering language, special characters, numbers, and so on (see the sketch after this list).
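
For illustration, here is a condensed sketch of what steps 2 and 3 do together: fetch pages asynchronously with `aiohttp`, trim each page to Wikipedia's main article container (`div#mw-content-text`), extract plain paragraph text with BeautifulSoup, and keep only passages that pass the `langid` French check. The helper names and the 1,400-character threshold are assumptions for this sketch; the real pipeline splits the work across `2_extract_content.py` and `3_extract_txt.py` and applies more checks.

```python
import asyncio

import aiohttp
import langid
from bs4 import BeautifulSoup

async def fetch_article_text(session: aiohttp.ClientSession, url: str) -> str | None:
    """Download one page, trim to the main article section, return French text."""
    async with session.get(url) as resp:
        html = await resp.text()
    # Wikipedia serves the article body inside div#mw-content-text.
    content = BeautifulSoup(html, "html.parser").find("div", id="mw-content-text")
    if content is None:
        return None
    text = "\n".join(p.get_text(" ", strip=True) for p in content.find_all("p"))
    lang, _score = langid.classify(text)
    return text if lang == "fr" and len(text) > 1400 else None

async def main() -> None:
    urls = ["https://fr.wikipedia.org/wiki/Python_(langage)"]
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch_article_text(session, u) for u in urls))
    for text in results:
        if text:
            print(text[:200])

if __name__ == "__main__":
    asyncio.run(main())
```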