license: apache-2.0
task_categories:
- question-answering
language:
- es
tags:
- computational linguistics
- spanish
- NLP
- json
size_categories:
- 1K<n<10K
Dataset Card for LingComp_QA, un corpus educativo de lingüística computacional en español
Dataset Details
Dataset Description
- Curated by: Jorge Zamora Rey, Isabel Moyano Moreno, Mario Crespo Miguel
- Funded by: SomosNLP, HuggingFace, Argilla, Instituto de Lingüística Aplicada de la Universidad de Cádiz
- Language(s) (NLP): es-ES
- License: apache-2.0
Dataset Sources
- Repository: https://github.com/reddrex/lingcomp_QA/tree/main
- Paper: Coming soon!
Uses
This dataset is intended for educational purposes.
Direct Use
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Dataset Structure
The dataset's structure looks like this:
[
  {
    "pregunta": "¿Qué implica la lingüística computacional teórica?",
    "respuesta": "La lingüística computacional teórica incluye el desarrollo de teorías formales de gramática y semántica, basadas en lógicas formales o enfoques simbólicos. Las áreas de estudio teórico en este ámbito incluyen la complejidad computacional y la semántica computacional."
  },
  {
    "pregunta": "¿Qué es una gramática libre de contexto?",
    "respuesta": "Una gramática libre de contexto es una gramática formal en la que cada regla de producción es de la forma V → w, donde V es un símbolo no terminal y w es una cadena de terminales y/o no terminales."
  },
  {
    "pregunta": "¿Qué es el algoritmo CYK y cuál es su propósito?",
    "respuesta": "El algoritmo de Cocke-Younger-Kasami (CYK) es un algoritmo de análisis sintáctico ascendente que determina si una cadena puede ser generada por una gramática libre de contexto y, en caso afirmativo, cómo puede ser generada. Su propósito es realizar un análisis sintáctico de la cadena para determinar su estructura gramatical."
  },
  {...}
]
The dataset contains a "pregunta" (question) column and a "respuesta" (answer) column, where each question is paired with its corresponding answer.
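A minimal loading sketch with the Hugging Face datasets library is shown below; the "train" split name is an assumption based on the train-and-test split notebook linked further down in this card.

```python
# Minimal loading sketch (assumes the dataset id somosnlp/LingComp_QA and a "train"
# split produced by the train/test split notebook linked in this card).
from datasets import load_dataset

dataset = load_dataset("somosnlp/LingComp_QA")

# Each row is a question-answer pair.
example = dataset["train"][0]
print(example["pregunta"])
print(example["respuesta"])
```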
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
First, we collected information on different aspects of computational linguistics (statistics, computer science, linguistics, corpus linguistics, etc.) from open blogs and webpages using BootCaT. After this, we manually extracted information segments and wrote a question for each one. All three team members then revised the whole document, removed duplicate questions from each member's portion of the corpus and checked the wording. We also tried to make some explanations easier to understand for a broader audience, since the purpose of this project is mainly educational.
The scripts used to create the corpus are linked below; a minimal sketch of the txt-to-JSON conversion step follows the list:
- https://github.com/reddrex/lingcomp_QA/blob/main/dataset/Creación_archivo_JSON_a_partir_de_txt.ipynb
- https://github.com/reddrex/lingcomp_QA/blob/main/train%20and%20test%20split/Train_and_test_LingComp_QA_split.ipynb
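As a rough illustration of the conversion step, the sketch below builds a JSON file of question-answer pairs from a plain-text file. The alternating question/answer line format and the file names are assumptions for this example, not necessarily what the linked notebook uses.

```python
# Sketch of the txt-to-JSON conversion (assumed input format: alternating
# question/answer lines in a UTF-8 text file; file names are hypothetical).
import json

def txt_to_json(txt_path: str, json_path: str) -> None:
    with open(txt_path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]

    # Pair every question line with the answer line that follows it.
    pairs = [
        {"pregunta": q, "respuesta": a}
        for q, a in zip(lines[::2], lines[1::2])
    ]

    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(pairs, f, ensure_ascii=False, indent=2)

txt_to_json("corpus.txt", "lingcomp_qa.json")
```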
Who are the source data producers?
Annotations
Annotation process
[More Information Needed]
Who are the annotators?
The annotators are the members of our team: Jorge Zamora, Isabel Moyano and Mario Crespo.
Personal and Sensitive Information
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
[More Information Needed]
License
Apache 2.0
Citation
BibTeX:
@software{LingComp_QA,
  author = {Zamora Rey, Jorge and Crespo Miguel, Mario and Moyano Moreno, Isabel},
  title  = {LingComp_QA, un corpus educativo de lingüística computacional en español},
  month  = mar,
  year   = 2024,
  url    = {https://huggingface.co/datasets/somosnlp/LingComp_QA}
}
More Information
This project was developed during the Hackathon #Somos600M organized by SomosNLP. The dataset was created using distilabel by Argilla and endpoints sponsored by HuggingFace.
Team: