Update README.md
README.md
### Dataset Summary

A consolidated and cleaned-up version of the OpenSources Fake News dataset.

The Fake News Corpus comprises 8,529,090 individual articles, classified into 12 classes: reliable, unreliable, political, bias, fake, conspiracy, rumor, clickbait, junk science, satire, hate, and unknown. The articles were scraped between the end of 2017 and the beginning of 2018 from various news websites, totaling 647 distinct sources, and date from the years leading up to the 2016 US elections and the year after. Documents were classified according to their source, using the curated website list provided by opensources.co, a crowdsourced platform for tracking online information sources; this source-level labeling leads to a highly imbalanced class distribution. The proposed source classification method was based on six criteria:

- title and domain name analysis,
- “About Us” analysis,
- source or study mentioning,
- writing style analysis,
- aesthetic analysis, and
- social media analysis.

After extensive data cleaning and duplicate removal, we retain **5,915,569** records.

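As a rough illustration of the class imbalance noted above, the sketch below loads the corpus with the Hugging Face `datasets` library and counts articles per class. The repository id and the `type` label column are assumptions for illustration, not confirmed by this card; substitute the actual values for this dataset.

```python
# Minimal sketch: inspect the class distribution of the corpus.
# Assumptions (not confirmed by this card): the dataset is hosted on the
# Hugging Face Hub under a repo id like "user/fake-news-corpus-cleaned",
# and the class label is stored in a column named "type".
from collections import Counter

from datasets import load_dataset

# Stream the data to avoid downloading all ~5.9M records up front.
ds = load_dataset("user/fake-news-corpus-cleaned", split="train", streaming=True)

# Count labels over a sample of the stream; remove .take() to scan everything.
label_counts = Counter(example["type"] for example in ds.take(100_000))

for label, count in label_counts.most_common():
    print(f"{label:>14}: {count:,}")
```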
### Languages