---
language:
- es
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
pretty_name: NoticIA Human Validation
dataset_info:
  features:
  - name: web_url
    dtype: string
  - name: web_headline
    dtype: string
  - name: summary
    dtype: string
  - name: web_text
    dtype: string
  - name: clean_web_text
    dtype: string
  splits:
  - name: train
    num_bytes: 3966804
    num_examples: 700
  - name: validation
    num_bytes: 352363
    num_examples: 50
  - name: test
    num_bytes: 588932
    num_examples: 100
  download_size: 2908498
  dataset_size: 4908099
configs:
- config_name: default
  data_files:
  - split: test
    path: test.jsonl
tags:
- summarization
- clickbait
- news
---
+
49
+ <p align="center">
50
+ <img src="https://huggingface.co/datasets/Iker/NoticIA/resolve/main/assets/logo.png" style="height: 250px;">
51
+ </p>
52
+
53
+ <h3 align="center">"A Spanish dataset for Clickbait articles summarization"</h3>
54
+
55
+ This repository contains the manual annotations from a second human in order to validate the test set of the NoticIA dataset.
56
+
57
+ The full NoticIA dataset is available here: [https://huggingface.co/datasets/Iker/NoticIA](https://huggingface.co/datasets/Iker/NoticIA)
58
+
59
+
# Data explanation
- **web_url** (str): The URL of the news article.
- **web_headline** (str): The headline of the article, which is a clickbait headline.
- **web_text** (str): The body of the article.
- **clean_web_text** (str): The `web_text` is extracted from the web HTML and can contain undesired text unrelated to the news article. The `clean_web_text` has been cleaned using the OpenAI gpt-3.5-turbo-0125 model, which we ask to remove any sentence unrelated to the article.
- **summary** (str): The original summary in the NoticIA dataset.
- **summary2** (str): A second summary, written by another human, used to validate the quality of `summary`.
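A minimal sketch for inspecting the fields listed above (assuming, per the `configs` section, that only the `test` split is distributed in this repository):

```python
from datasets import load_dataset

# Per the configs section, only the "test" split (test.jsonl) is shipped here
dataset = load_dataset("Iker/NoticIA_Human_Validation", split="test")

example = dataset[0]
print(example["web_url"])               # URL of the news article
print(example["web_headline"])          # the clickbait headline
print(example["clean_web_text"][:200])  # cleaned article body (truncated)
print(example["summary"])               # original NoticIA summary
print(example["summary2"])              # second human summary used for validation
```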
# Dataset Description
- **Curated by:** [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139)
- **Language(s) (NLP):** Spanish
- **License:** apache-2.0
# Dataset Usage

```python
from datasets import load_dataset
from evaluate import load

# Load the human-validated test split
dataset = load_dataset("Iker/NoticIA_Human_Validation", split="test")

# Compare the second human summaries against the original references with ROUGE
rouge = load("rouge")
results = rouge.compute(
    predictions=[x["summary2"] for x in dataset],
    references=[[x["summary"]] for x in dataset],
    use_aggregator=True,
)
print(results)
```

Because `summary2` is a second human-written reference, these ROUGE scores provide an estimate of human agreement on the task.

# Uses
This dataset is intended for building models, tailored for academic research, that can extract information from large texts. The objective is to study whether current LLMs, given a question formulated as a clickbait headline, can locate the answer within the article body and summarize the information in a few words. The dataset also aims to serve as a task for evaluating the performance of current LLMs in Spanish.
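As an illustration of this framing, a clickbait headline and article body can be combined into a single instruction for an LLM. The prompt below is a hypothetical sketch for illustration only, not the official NoticIA evaluation prompt:

```python
from datasets import load_dataset

dataset = load_dataset("Iker/NoticIA_Human_Validation", split="test")
example = dataset[0]

# Hypothetical Spanish instruction; the official NoticIA prompt may differ.
prompt = (
    "Eres un asistente que responde a titulares clickbait.\n"
    f"Titular: {example['web_headline']}\n"
    f"Noticia: {example['clean_web_text']}\n"
    "Responde a la pregunta que plantea el titular en una sola frase breve."
)

# `prompt` can be sent to any instruction-tuned LLM; the generated answer
# would then be compared against example["summary"] (e.g., with ROUGE).
```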
# Out-of-Scope Use
You cannot use this dataset to develop systems that directly harm the newspapers included in the dataset. This includes using the dataset to train profit-oriented LLMs capable of generating articles from a short text or headline, as well as developing profit-oriented bots that automatically summarize articles without the permission of the articles' owners. Additionally, you are not permitted to train a system with this dataset that generates clickbait headlines.
# Dataset Creation
The dataset has been meticulously created by hand. We compiled clickbait articles from two sources:
- The Twitter user [@ahorrandoclick1](https://twitter.com/ahorrandoclick1), who reposts clickbait articles along with a hand-crafted summary. Although we used their summaries as a reference, most of them were rewritten (750 examples from this source).
- The web demo [⚔️ClickbaitFighter⚔️](https://iker-clickbaitfighter.hf.space/), which serves a model pre-trained on an early iteration of our dataset. We collected all the model inputs/outputs and manually corrected them (100 examples from this source).
# Who are the annotators?
The dataset was originally annotated by [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/) and has been validated by [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139).
The annotation took ~40 hours.