Modalities: Text
Formats: json
Languages: Spanish
Libraries: Datasets, pandas
reddrex committed
Commit: 097cccd
Parent(s): f45a7ad

Update README.md

Files changed (1):
  1. README.md (+9 -5)
README.md CHANGED
@@ -36,8 +36,8 @@ Para más información sobre la dataset card metadata ver: https://github.com/hu
 
 <!-- Resumen del dataset. -->
 
-- **Curated by:** Jorge Zamora Rey (https://huggingface.co/reddrex), Isabel Moyano Moreno (https://huggingface.co/issyinthesky), Mario Crespo Miguel (https://huggingface.co/MCMiguel) <!-- Nombre de los miembros del equipo -->
-- **Funded by:** SomosNLP, HuggingFace, Argilla, Universidad de Cádiz <!-- Si contasteis con apoyo de otra entidad (e.g. vuestra universidad), añadidla aquí -->
+- **Curated by:** [Jorge Zamora Rey](https://huggingface.co/reddrex), [Isabel Moyano Moreno](https://huggingface.co/issyinthesky), [Mario Crespo Miguel](https://huggingface.co/MCMiguel) <!-- Nombre de los miembros del equipo -->
+- **Funded by:** SomosNLP, HuggingFace, Argilla, Instituto de Lingüística Aplicada de la Universidad de Cádiz <!-- Si contasteis con apoyo de otra entidad (e.g. vuestra universidad), añadidla aquí -->
 - **Language(s) (NLP):** es-ES <!-- Enumerar las lenguas en las que se ha entrenado el modelo, especificando el país de origen. Utilizar códigos ISO. Por ejemplo: Spanish (`es-CL`, `es-ES`, `es-MX`), Catalan (`ca`), Quechua (`qu`). -->
 - **License:** apache-2.0 <!-- Elegid una licencia lo más permisiva posible teniendo en cuenta la licencia del model pre-entrenado y los datasets utilizados -->
 
@@ -61,7 +61,6 @@ This dataset is intended for educational purposes.
 
 <!-- This section describes suitable use cases for the dataset. -->
 
-
 [More Information Needed]
 
 ### Out-of-Scope Use
@@ -114,14 +113,19 @@ We have a "pregunta" or question column, and a "respuesta" or answer column, whe
 <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
 <!-- Enlazar aquí los scripts y notebooks utilizados para generar el corpus. -->
+First, we collected information on different aspects of Computational Linguistics (statistics, computer science, linguistics, corpus linguistics, etc.) from open blogs and webpages with [Bootcat](https://bootcat.dipintra.it).
+After this, we manually extracted information and created questions for each information segment. Then all three revised the whole document, deleted duplicated questions from each member's corpus portion and checked expressions.
+We also tried to make some explanations easier to comprehend for a broader audience, as the purpose of this project is mainly educational.
 
-[More Information Needed]
+Here we link the scripts used in the creation of the corpus, which are the following:
+
+- https://github.com/reddrex/lingcomp_QA/blob/main/dataset/Creación_archivo_JSON_a_partir_de_txt.ipynb
+- https://github.com/reddrex/lingcomp_QA/blob/main/train%20and%20test%20split/Train_and_test_LingComp_QA_split.ipynb
 
 #### Who are the source data producers?
 
 <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
 
-[More Information Needed]
 
 ### Annotations [optional]
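The two notebooks linked in the diff (building a JSON file of `pregunta`/`respuesta` records from plain text, then splitting it into train and test) are not reproduced here. A minimal sketch of those two steps, assuming a hypothetical plain-text convention where each blank-line-separated segment holds a question on its first line and the answer below, might look like this (the sample strings and function names are invented for illustration, not taken from the notebooks):

```python
import json
import random

def segments_to_records(text):
    """Turn blank-line-separated text segments into Q&A records.

    Hypothetical convention: the first line of each segment is the
    question ("pregunta"); the remaining lines are the answer ("respuesta").
    """
    records = []
    for block in text.split("\n\n"):
        lines = [ln for ln in block.strip().splitlines() if ln.strip()]
        if len(lines) >= 2:
            records.append({"pregunta": lines[0],
                            "respuesta": " ".join(lines[1:])})
    return records

def train_test_split(records, test_ratio=0.2, seed=42):
    """Deterministically shuffle and split the records into two portions."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(round(len(shuffled) * (1 - test_ratio)))
    return shuffled[:cut], shuffled[cut:]

# Tiny invented sample, not real corpus content
raw = ("¿Qué es un corpus?\nUn conjunto de textos recopilados con algún criterio.\n\n"
       "¿Qué es un token?\nUna unidad mínima de texto, como una palabra.")
records = segments_to_records(raw)
train, test = train_test_split(records, test_ratio=0.5)
print(json.dumps(train + test, ensure_ascii=False, indent=2))
```

A fixed seed keeps the split reproducible, which matters when the train and test JSON files are committed to the repository as they are here.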