annotations_creators:
  - no-annotation
language_creators:
  - expert-generated
language:
  - es
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - 100K<n<1M
  - 100M<n<1B
  - 10K<n<100K
  - 10M<n<100M
  - 1M<n<10M
source_datasets:
  - original
task_categories:
  - other
task_ids: []
paperswithcode_id: null
pretty_name: The Large Spanish Corpus
tags: []
dataset_info:
  - config_name: JRC
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 380895504
        num_examples: 3410620
    download_size: 4099166669
    dataset_size: 380895504
  - config_name: EMEA
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 100259598
        num_examples: 1221233
    download_size: 4099166669
    dataset_size: 100259598
  - config_name: GlobalVoices
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 114435784
        num_examples: 897075
    download_size: 4099166669
    dataset_size: 114435784
  - config_name: ECB
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 336285757
        num_examples: 1875738
    download_size: 4099166669
    dataset_size: 336285757
  - config_name: DOGC
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 898279656
        num_examples: 10917053
    download_size: 4099166669
    dataset_size: 898279656
  - config_name: all_wikis
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 3782280549
        num_examples: 28109484
    download_size: 4099166669
    dataset_size: 3782280549
  - config_name: TED
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 15858148
        num_examples: 157910
    download_size: 4099166669
    dataset_size: 15858148
  - config_name: multiUN
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 2327269369
        num_examples: 13127490
    download_size: 4099166669
    dataset_size: 2327269369
  - config_name: Europarl
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 359897865
        num_examples: 2174141
    download_size: 4099166669
    dataset_size: 359897865
  - config_name: NewsCommentary11
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 48350573
        num_examples: 288771
    download_size: 4099166669
    dataset_size: 48350573
  - config_name: UN
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 23654590
        num_examples: 74067
    download_size: 4099166669
    dataset_size: 23654590
  - config_name: EUBookShop
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 1326861077
        num_examples: 8214959
    download_size: 4099166669
    dataset_size: 1326861077
  - config_name: ParaCrawl
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 1840430234
        num_examples: 15510649
    download_size: 4099166669
    dataset_size: 1840430234
  - config_name: OpenSubtitles2018
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 7477281776
        num_examples: 213508602
    download_size: 4099166669
    dataset_size: 7477281776
  - config_name: DGT
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 396217351
        num_examples: 3168368
    download_size: 4099166669
    dataset_size: 396217351
  - config_name: combined
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 19428257807
        num_examples: 302656160
    download_size: 4099166669
    dataset_size: 19428257807
config_names:
  - DGT
  - DOGC
  - ECB
  - EMEA
  - EUBookShop
  - Europarl
  - GlobalVoices
  - JRC
  - NewsCommentary11
  - OpenSubtitles2018
  - ParaCrawl
  - TED
  - UN
  - all_wikis
  - combined
  - multiUN

# Dataset Card for The Large Spanish Corpus

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora, ranging from Wikipedia articles to European Parliament proceedings. Each config contains the data from a single corpus. For example, the `all_wikis` config only includes examples from Spanish Wikipedia:

```python
from datasets import load_dataset

all_wikis = load_dataset('large_spanish_corpus', name='all_wikis')
```

By default, the config is set to `combined`, which loads all of the corpora at once.
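Since the full download is roughly 4 GB, it can help to pull only the first few examples when experimenting. The helper below is an illustrative sketch, not part of the dataset's API; it assumes only that each example is a dict with a single `text` field (as the metadata above specifies), and is demonstrated on a stand-in iterable:

```python
from itertools import islice

def take_texts(examples, n):
    """Collect the `text` field from the first n examples of any iterable."""
    return [ex["text"] for ex in islice(examples, n)]

# With the real dataset you could stream to avoid the full download, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("large_spanish_corpus", name="combined", streaming=True)
#   texts = take_texts(ds["train"], 5)
# Here we demonstrate on a stand-in generator with the same schema:
fake_stream = ({"text": f"ejemplo {i}"} for i in range(1000))
texts = take_texts(fake_stream, 3)
print(texts)  # ['ejemplo 0', 'ejemplo 1', 'ejemplo 2']
```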

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is monolingual Spanish (`es`).

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- `text`: a `string` feature holding the raw text of one document from the corresponding corpus.
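Because every config shares this single-field schema, batched transforms are straightforward. The function below is a hypothetical example (the `num_chars` column name is invented for illustration); with the real dataset it would be applied via `ds.map(char_counts, batched=True)`, but here it runs on a stand-in batch:

```python
def char_counts(batch):
    """Batched map function: derive a character count for each `text` entry."""
    return {"num_chars": [len(t) for t in batch["text"]]}

# Stand-in batch with the same schema as the dataset:
batch = {"text": ["Hola mundo", "El corpus es muy grande"]}
print(char_counts(batch))  # {'num_chars': [10, 23]}
```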

### Data Splits

Each config provides a single `train` split. The number of training examples per config, taken from the dataset metadata above, is:

| Config            | Train examples |
| ----------------- | -------------: |
| JRC               |      3,410,620 |
| EMEA              |      1,221,233 |
| GlobalVoices      |        897,075 |
| ECB               |      1,875,738 |
| DOGC              |     10,917,053 |
| all_wikis         |     28,109,484 |
| TED               |        157,910 |
| multiUN           |     13,127,490 |
| Europarl          |      2,174,141 |
| NewsCommentary11  |        288,771 |
| UN                |         74,067 |
| EUBookShop        |      8,214,959 |
| ParaCrawl         |     15,510,649 |
| OpenSubtitles2018 |    213,508,602 |
| DGT               |      3,168,368 |
| combined          |    302,656,160 |
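As a consistency check, the `combined` config's example count in the metadata above equals the sum of the 15 individual configs:

```python
# Per-config train example counts, copied from the dataset metadata above.
num_examples = {
    "JRC": 3_410_620,
    "EMEA": 1_221_233,
    "GlobalVoices": 897_075,
    "ECB": 1_875_738,
    "DOGC": 10_917_053,
    "all_wikis": 28_109_484,
    "TED": 157_910,
    "multiUN": 13_127_490,
    "Europarl": 2_174_141,
    "NewsCommentary11": 288_771,
    "UN": 74_067,
    "EUBookShop": 8_214_959,
    "ParaCrawl": 15_510_649,
    "OpenSubtitles2018": 213_508_602,
    "DGT": 3_168_368,
}

total = sum(num_examples.values())
print(total)  # 302656160, matching the `combined` config
```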

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

The corpora are unannotated; the metadata above lists the annotation type as `no-annotation`.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The metadata above lists the dataset license as MIT.

### Citation Information

[More Information Needed]

### Contributions

Thanks to @lewtun for adding this dataset.