Updated description of the dataset format and content
README.md CHANGED

@@ -21,7 +21,7 @@ license: cc-by-nc-sa-4.0
 
 ### Dataset Summary
 
-The CA-FR Parallel Corpus is a Catalan-French dataset of
+The CA-FR Parallel Corpus is a Catalan-French dataset of parallel sentences created to support Catalan in NLP tasks, specifically
 Machine Translation.
 
 ### Supported Tasks and Leaderboards
@@ -37,11 +37,9 @@ The sentences included in the dataset are in Catalan (CA) and French (FR).
 
 ### Data Instances
 
-
-
-
-
-- ca-fr_corpus.fr: contains 18.634.844 French sentences.
+A single tsv file is provided with the sentences sorted in the same order and
+a header containing the two-letter ISO language code for the language in each column:
+ca-fr_corpus.tsv.
 
 ### Data Fields
 
@@ -61,23 +59,12 @@ This dataset is aimed at promoting the development of Machine Translation betwee
 
 #### Initial Data Collection and Normalization
 
-The
-
-| Dataset         |  Sentences |
-|:----------------|-----------:|
-| CCMatrix        | 16.305.758 |
-| Multi CCAligned |  1.442.584 |
-| WikiMatrix      |    437.665 |
-| GNOME           |      1.686 |
-| KDE 4           |    111.750 |
-| Open Subtitles  |    225.786 |
-
-
-All corpora were collected from [Opus](https://opus.nlpl.eu/).
+The corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
+CCMatrix, Multi CCAligned, WikiMatrix, GNOME, KDE 4, Open Subtitles.
 
 All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
 This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
-The filtered datasets are then concatenated to form
+The filtered datasets are then concatenated to form the final corpus.
 
 #### Who are the source language producers?
 
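The updated card describes the format as a single tsv file whose header row holds the two-letter ISO language code for each column. A minimal sketch of reading that format, assuming the header codes are `ca` and `fr` (the diff does not show the header itself) and using invented sample sentences in place of the real file:

```python
import csv
import io

# Hypothetical excerpt mirroring the described ca-fr_corpus.tsv layout:
# a header row of ISO codes, then one tab-separated sentence pair per line.
sample = io.StringIO(
    "ca\tfr\n"
    "Bon dia.\tBonjour.\n"
    "Gràcies.\tMerci.\n"
)

reader = csv.DictReader(sample, delimiter="\t")
pairs = [(row["ca"], row["fr"]) for row in reader]
print(pairs)
```

With the actual corpus, the `io.StringIO` stand-in would be replaced by `open("ca-fr_corpus.tsv", encoding="utf-8")`.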
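The filtering step described in the diff keeps only sentence pairs whose embeddings have a cosine similarity of at least 0.75. The logic can be sketched as below; in real use the vectors would come from the LaBSE model (e.g. via the sentence-transformers library), while here toy 2-D vectors and invented sentence pairs stand in:

```python
import numpy as np

def filter_by_cosine(src_emb, tgt_emb, pairs, threshold=0.75):
    """Keep only pairs whose embedding cosine similarity is >= threshold."""
    # Normalise rows so the per-row dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = np.sum(src * tgt, axis=1)
    return [p for p, s in zip(pairs, sims) if s >= threshold]

# Toy embeddings: the first pair points in nearly the same direction
# (high similarity, kept), the second is orthogonal (dropped).
src = np.array([[1.0, 0.0], [1.0, 0.0]])
tgt = np.array([[0.9, 0.1], [0.0, 1.0]])
kept = filter_by_cosine(src, tgt, [("Bon dia.", "Bonjour."), ("Bon dia.", "Fromage.")])
print(kept)
```

This threshold-on-cosine scheme matches the card's description; the exact batching and deduplication details of the original pipeline are not shown in the diff.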