|
--- |
|
license: apache-2.0 |
|
task_categories: |
|
- question-answering |
|
language: |
|
- es |
|
tags: |
|
- computational linguistics |
|
- spanish |
|
- NLP |
|
- json |
|
size_categories: |
|
- 1K<n<10K |
|
--- |
|
<!--

This Dataset Card template is adapted from the Hugging Face one: https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md



How to use this template? Copy the content into the README.md of your dataset repo on the Hugging Face Hub and fill in each section.



For more information on how to fill in each section, see the docs: https://huggingface.co/docs/hub/datasets-cards and https://huggingface.co/docs/datasets/dataset_card



For more information about the dataset card metadata, see: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1

-->
|
|
|
# Dataset Card for LingComp_QA, un corpus educativo de lingüística computacional en español |
|
|
|
<!-- There is usually a short name ("pretty name") for URLs, tables, etc., and a longer, more descriptive one. You can use an acronym as the pretty name. -->
|
|
|
<!-- Summary of the corpus and motivation for the project (incl. related SDGs). This section works like the abstract. The project logo can also be included here. -->
|
|
|
<!-- If you want to include a Spanish version of the Dataset Card, link it here at the top (e.g. `README_es.md`). -->
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
<!-- Summary of the dataset. -->
|
|
|
- **Curated by:** [Jorge Zamora Rey](https://huggingface.co/reddrex), [Isabel Moyano Moreno](https://huggingface.co/issyinthesky), [Mario Crespo Miguel](https://huggingface.co/MCMiguel) <!-- Names of the team members -->
|
- **Funded by:** SomosNLP, HuggingFace, Argilla, Instituto de Lingüística Aplicada de la Universidad de Cádiz <!-- If you had support from another entity (e.g. your university), add it here -->
|
- **Language(s) (NLP):** es-ES <!-- List the languages of the dataset, specifying the country of origin. Use ISO codes. For example: Spanish (`es-CL`, `es-ES`, `es-MX`), Catalan (`ca`), Quechua (`qu`). -->
|
- **License:** apache-2.0 <!-- Choose the most permissive license possible, taking into account the licenses of the data sources used -->
|
|
|
### Dataset Sources |
|
|
|
- **Repository:** https://github.com/reddrex/lingcomp_QA/tree/main <!-- Link to the `main` branch of the repo with your scripts, i.e. either this same dataset repo on the Hugging Face Hub or GitHub. -->
|
- **Paper:** Coming soon! <!-- If you are submitting it to NAACL, put "WIP", "Coming soon!" or similar. If you do not plan to submit it to any conference or write a preprint, remove this line. -->
|
|
|
<!-- ### Dataset Versions & Formats [optional] --> |
|
|
|
<!-- If you have several versions of your dataset you can combine them in a single repo and simply link the corresponding commits here. See the example at https://huggingface.co/bertin-project/bertin-roberta-base-spanish -->
|
|
|
<!-- If the dataset comes in several formats (e.g. unannotated, question/answer, gemma) you can list them here. -->
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the dataset is intended to be used. --> |
|
This dataset is intended for educational purposes. As we further develop this resource, we would like it to serve as a learning resource for NLP and Computational Linguistics beginners - whether for tests, looking up answers to common questions, or studying key concepts and methodologies of CL.
|
|
|
### Direct Use |
|
|
|
<!-- This section describes suitable use cases for the dataset. --> |
|
|
|
There is no single intended use case for this dataset. However, we would like to develop a conversational language model that answers questions on Computational Linguistics.

The dataset could also be used to develop other educational tools and resources, such as interactive quizzes, tutorials, and study materials, to help students learn about computational linguistics concepts, methodologies, and applications.
|
|
|
### Out-of-Scope Use |
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> |
|
|
|
The dataset is specifically designed for tasks related to computational linguistics, language processing, and natural language understanding. Therefore, using the dataset for unrelated tasks, such as image processing or numerical analysis, would be considered out of scope. |
|
Other out-of-scope uses include product development, marketing, or any other commercial use, as the dataset is intended for research and educational purposes.
|
|
|
## Dataset Structure |
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
<!-- In this section you can list and explain each column of the corpus. For each categorical column you can indicate the percentage of examples per value. -->
|
|
|
The dataset's structure looks like this: |
|
|
|
```json |
|
[
  {
    "pregunta": "¿Qué implica la lingüística computacional teórica?",
    "respuesta": "La lingüística computacional teórica incluye el desarrollo de teorías formales de gramática y semántica, basadas en lógicas formales o enfoques simbólicos. Las áreas de estudio teórico en este ámbito incluyen la complejidad computacional y la semántica computacional."
  },
  {
    "pregunta": "¿Qué es una gramática libre de contexto?",
    "respuesta": "Una gramática libre de contexto es una gramática formal en la que cada regla de producción es de la forma V → w, donde V es un símbolo no terminal y w es una cadena de terminales y/o no terminales."
  },
  {
    "pregunta": "¿Qué es el algoritmo CYK y cuál es su propósito?",
    "respuesta": "El algoritmo de Cocke-Younger-Kasami (CYK) es un algoritmo de análisis sintáctico ascendente que determina si una cadena puede ser generada por una gramática libre de contexto y, en caso afirmativo, cómo puede ser generada. Su propósito es realizar un análisis sintáctico de la cadena para determinar su estructura gramatical."
  },
  {...}
]
|
``` |
|
We have a "pregunta" (question) column and a "respuesta" (answer) column; each question is paired with its answer.
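
For quick orientation, the dataset can be loaded with the `datasets` library. This is a minimal sketch; the `"train"` split name is an assumption based on the train/test split notebook linked further down in this card:

```python
from datasets import load_dataset

# Load LingComp_QA from the Hugging Face Hub.
# The "train" split name is an assumption; see the train/test
# split notebook linked below for how the splits were produced.
dataset = load_dataset("somosnlp/LingComp_QA", split="train")

# Each example is a question-answer pair.
example = dataset[0]
print(example["pregunta"])
print(example["respuesta"])
```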
|
The dataset covers the following themes:

- Algorithms and formalisms

- Programming languages

- CPU/GPU

- Environments such as Colaboratory or Jupyter

- Python: data types, built-in functions, methods, object-oriented programming, list comprehensions, etc.

- NLTK

- spaCy

- History and evolution of NLP

- NLP/Computational Linguistics (computational syntax and semantics, differences, concepts...)

- Linguistics

- Resources such as FrameNet, WordNet, Treebank, the Brown Corpus, ontologies

- Corpus linguistics: concordances, collocations, statistical matters (chi-squared, log-likelihood, data, sampling...)
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
|
|
The lack of NLP educational resources aimed at linguists, especially in Spanish, drove us to make a first attempt at collecting information on this topic from open internet sources. We aim to grow the corpus and create a foundational resource for teaching linguists (and other beginners) the principles, techniques, and applications of computational linguistics and NLP.
|
|
|
### Source Data |
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
Blogs, Wikipedia articles, and the materials from our Computational Linguistics and Language Engineering courses at the University of Cádiz make up the source data for this dataset.
|
|
|
#### Data Collection and Processing |
|
|
|
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> |
|
|
|
<!-- Link here the scripts and notebooks used to generate the corpus. -->
|
First, we collected information on different aspects of Computational Linguistics (statistics, computer science, linguistics, corpus linguistics, etc.) from open blogs and webpages with [BootCaT](https://bootcat.dipintra.it).

After this, we manually extracted information and created questions for each information segment. Then all three team members revised the whole document, deleted duplicated questions from each member's portion of the corpus (as sketched below), and checked the wording.
|
We also tried to make some explanations easier to understand for a broader audience, as the purpose of this project is mainly educational.
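
The deduplication itself was done entirely by hand; purely as an illustration, an equivalent programmatic check over the merged question-answer pairs could look like the sketch below (the file name `lingcomp_qa.json` is hypothetical, and this is not the actual revision workflow):

```python
import json

def deduplicate(pairs):
    """Drop pairs whose question text repeats, keeping first occurrences."""
    seen = set()
    unique = []
    for pair in pairs:
        key = pair["pregunta"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(pair)
    return unique

# Hypothetical file name for the merged corpus portions.
with open("lingcomp_qa.json", encoding="utf-8") as f:
    pairs = json.load(f)

print(len(deduplicate(pairs)), "unique questions")
```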
|
|
|
Here we link the scripts used in the creation of the corpus:
|
|
|
- https://github.com/reddrex/lingcomp_QA/blob/main/dataset/Creación_archivo_JSON_a_partir_de_txt.ipynb |
|
- https://github.com/reddrex/lingcomp_QA/blob/main/train%20and%20test%20split/Train_and_test_LingComp_QA_split.ipynb |
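
For orientation, the first notebook converts the plain-text question files into the JSON format shown above. Here is a minimal sketch of that kind of conversion, assuming a text file where each question line is immediately followed by its answer line (the file names and the exact input format are assumptions; see the notebook for the actual code):

```python
import json

# Assumed input format: question and answer on alternating lines.
with open("corpus.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

# Pair consecutive lines into question-answer records.
pairs = [
    {"pregunta": question, "respuesta": answer}
    for question, answer in zip(lines[::2], lines[1::2])
]

with open("lingcomp_qa.json", "w", encoding="utf-8") as f:
    json.dump(pairs, f, ensure_ascii=False, indent=2)
```

The second notebook produces the train/test split; with the `datasets` library this corresponds to something like `Dataset.from_list(pairs).train_test_split(test_size=0.2)` (the split proportion here is an assumption).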
|
|
|
#### Who are the source data producers? |
|
|
|
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> |
|
Our team members manually checked and organized the information into sets of questions and answers, rewriting some of it in a style more suitable for learners.
|
|
|
<!-- ### Annotations [optional] --> |
|
|
|
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> |
|
|
|
#### Annotation process |
|
|
|
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> |
|
|
|
<!-- Link here the notebook used to create the Argilla annotation space and the annotation guidelines. -->
|
|
|
We manually sorted the information into question-answer pairs. However, we used the following Colaboratory notebook to create the JSON file:
|
- https://github.com/reddrex/lingcomp_QA/blob/main/dataset/Creación_archivo_JSON_a_partir_de_txt.ipynb |
|
|
|
#### Who are the annotators? |
|
|
|
<!-- This section describes the people or systems who created the annotations. --> |
|
The annotators are the members of our team: Jorge Zamora, Isabel Moyano and Mario Crespo. |
|
|
|
#### Personal and Sensitive Information |
|
|
|
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> |
|
|
|
The dataset contains no personal, sensitive, or private data. The only names and dates that appear in it are those of scientists, programmes, and key dates in the development of NLP as an area of Artificial Intelligence.
|
|
|
## Bias, Risks, and Limitations |
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
<!-- Here you can mention possible biases inherited from the origin of the data and from the people who annotated it, discuss the balance of the represented categories, and the efforts made to mitigate biases and risks. -->
|
|
|
The main potential bias comes from the sources from which we extracted the information. Some blogs or Wikipedia articles may use different terminology for the same concept, and although we have tried to correct this, some terms could have escaped our supervision.

Also, the limited availability of information on Computational Linguistics and NLP in Spanish on the Internet may have created an imbalance in the topics covered by the dataset. For example, there happens to be more information on Python usage than on NLTK, and more on NLTK than on spaCy.

Our future plans include balancing the topics by translating from English sources. We would also like to add QA pairs that do not appear in any relevant open source of information but that we believe would be useful for learners, drawing mostly on our experience in the Linguistics and Applied Languages bachelor's degree, although we are open to requests.
|
|
|
The limitations we found while building the dataset are mostly time-related, as such a broad topic is difficult to cover in a limited amount of time. Furthermore, we were unable to fully balance the coverage of all the themes involved, as there were not enough publicly available information sources on the Internet that we could use to document our QA pairs.
|
|
|
### Recommendations |
|
|
|
Users should be made aware of the risks, biases, and limitations of the dataset.

We recommend checking the dataset from time to time, following our social media, or contacting us via email; through those channels we will announce whether and when a new version with more information and a broader range of sources will be released.
|
|
|
## License |
|
|
|
<!-- State the license under which the dataset is released and, if it is not Apache 2.0, explain why a more restrictive license applies (i.e. inherited from the data used). -->
|
Apache 2.0 |
|
|
|
## Citation |
|
|
|
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> |
|
|
|
**BibTeX:** |
|
|
|
``` |
|
@software{LingComp_QA, |
|
author = {Zamora Rey, Jorge and Crespo Miguel, Mario and Moyano Moreno, Isabel}, |
|
title = {LingComp_QA, un corpus educativo de lingüística computacional en español}, |
|
month = mar,
|
year = 2024, |
|
url = {https://huggingface.co/datasets/somosnlp/LingComp_QA} |
|
} |
|
``` |
|
|
|
<!-- ## Glossary [optional] --> |
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> |
|
|
|
## More Information |
|
|
|
<!-- State here the framework in which the project was developed; in this section you can include acknowledgements and more information about the team members. Adapt the example as you like. -->
|
|
|
This project was developed during the [Hackathon #Somos600M](https://somosnlp.org/hackathon) organized by SomosNLP.
|
|
|
**Team:** |
|
|
|
- [Jorge Zamora Rey](https://huggingface.co/reddrex) |
|
- [Mario Crespo Miguel](https://huggingface.co/MCMiguel) |
|
- [Isabel Moyano Moreno](https://huggingface.co/issyinthesky) |
|
|
|
## Contact |
|
|
|
<!-- Contact email for questions about the dataset. -->
|
mario.crespo@uca.es |
|
isabel.moyano@uca.es |