---
annotations_creators:
  - no-annotation
language_creators:
  - expert-generated
languages:
  - es
licenses:
  DGT:
    - mit
  DOGC:
    - mit
  ECB:
    - mit
  EMEA:
    - mit
  EUBookShop:
    - mit
  Europarl:
    - mit
  GlobalVoices:
    - mit
  JRC:
    - mit
  NewsCommentary11:
    - mit
  OpenSubtitles2018:
    - mit
  ParaCrawl:
    - mit
  TED:
    - mit
  UN:
    - mit
  all_wikis:
    - mit
  combined:
    - mit
  multiUN:
    - mit
multilinguality:
  - monolingual
size_categories:
  DGT:
    - 1M<n<10M
  DOGC:
    - 10M<n<100M
  ECB:
    - 1M<n<10M
  EMEA:
    - 1M<n<10M
  EUBookShop:
    - 1M<n<10M
  Europarl:
    - 1M<n<10M
  GlobalVoices:
    - 100K<n<1M
  JRC:
    - 1M<n<10M
  NewsCommentary11:
    - 100K<n<1M
  OpenSubtitles2018:
    - 100M<n<1B
  ParaCrawl:
    - 10M<n<100M
  TED:
    - 100K<n<1M
  UN:
    - 10K<n<100K
  all_wikis:
    - 10M<n<100M
  combined:
    - 100M<n<1B
  multiUN:
    - 10M<n<100M
source_datasets:
  - original
task_categories:
  - other
task_ids:
  - other-other-pretraining-language-models
paperswithcode_id: null
pretty_name: The Large Spanish Corpus
---

# Dataset Card for The Large Spanish Corpus

## Table of Contents

## Dataset Description

### Dataset Summary

The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora, ranging from Wikipedia to European Parliament notes. Each config contains the data corresponding to a different corpus. For example, `all_wikis` only includes examples from Spanish Wikipedia:

```python
from datasets import load_dataset

all_wikis = load_dataset('large_spanish_corpus', name='all_wikis')
```

By default, the config is set to `combined`, which loads all of the corpora.
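
The default `combined` download is large (in the 100M<n<1B examples range per the metadata above). A minimal sketch of loading it, assuming the usual layout for unlabelled corpora on the Hub (a single `train` split of raw text examples):

```python
from datasets import load_dataset

# No `name` argument: loads the default "combined" config, which
# concatenates all 15 corpora (a large download).
corpus = load_dataset('large_spanish_corpus')

# Assumption: a single "train" split of raw text examples.
print(corpus)
print(corpus['train'][0])
```

For quick experiments, a smaller config such as `TED` or `GlobalVoices` (both in the 100K<n<1M range) keeps the download manageable.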

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The texts in the corpus are in Spanish (`es`).

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

Statistics for the individual corpora are reported in the corpus's source repository.
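
Each constituent corpus maps to its own config. As a small sketch, the available config names can be listed with the `datasets` library's `get_dataset_config_names` helper:

```python
from datasets import get_dataset_config_names

# One config per constituent corpus, plus the default "combined"
# config that concatenates them all.
configs = get_dataset_config_names('large_spanish_corpus')
print(configs)
```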

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.