Datasets: mteb /
Modalities: Text · Formats: json · Languages: English · Libraries: Datasets, pandas
Commit 60f9ab8 · committed by nouamanetazi (HF staff) · 1 Parent(s): 83a9802

clone from SetFit/toxic_conversations_50k

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +8 -0
  3. prepare.py +45 -0
.gitattributes CHANGED
@@ -35,3 +35,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.mp3 filter=lfs diff=lfs merge=lfs -text
  *.ogg filter=lfs diff=lfs merge=lfs -text
  *.wav filter=lfs diff=lfs merge=lfs -text
+ *.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,8 @@
+ # Toxic Conversation
+ This is a version of the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview). It contains comments from the Civil Comments platform together with annotations indicating whether each comment is toxic or not.
+
+ This dataset contains only the first 50k training examples.
+
+ Each example was annotated by 10 annotators and, as recommended on the task page, a comment is labeled toxic when target >= 0.5.
+
+ The dataset is imbalanced, with only about 8% of the comments marked as toxic.
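For reference, a minimal sketch of loading the data with the `datasets` library. It uses the repo id of the source dataset named in the commit message (`SetFit/toxic_conversations_50k`), since the id of this clone is not shown on this page; adjust the id if loading the clone instead.

```python
from datasets import load_dataset

# Load the source dataset this repo was cloned from (repo id taken from the
# commit message; swap in the clone's repo id if needed).
ds = load_dataset("SetFit/toxic_conversations_50k")

print(ds)              # expected: DatasetDict with "train" and "test" splits
print(ds["train"][0])  # expected fields: "text", "label", "label_text"
```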
prepare.py ADDED
@@ -0,0 +1,45 @@
+ import pandas as pd
+ from collections import Counter
+ import json
+ import random
+
+
+ # Load the original Jigsaw "Unintended Bias in Toxicity Classification" training CSV.
+ df = pd.read_csv("original.csv")
+
+ print(df)
+
+ # Exploratory check (kept commented out): share of rows with each toxicity
+ # sub-score >= 0.5.
+ """
+ for field in ["target", "severe_toxicity", "obscene", "identity_attack", "insult", "threat"]:
+     print("\n\n", field)
+     num_greater = 0
+     for val in df[field]:
+         if val >= 0.5:
+             num_greater += 1
+
+     print(num_greater, len(df[field]), f"{num_greater/len(df[field])*100:.2f}%")
+ """
+
+
+ # Binarize the toxicity score: a comment counts as toxic when target >= 0.5.
+ rows = [{'text': row['comment_text'].strip(),
+          'label': 1 if row['target'] >= 0.5 else 0,
+          'label_text': "toxic" if row['target'] >= 0.5 else "not toxic",
+          } for idx, row in df.iterrows()]
+
+ # Shuffle deterministically before splitting.
+ random.seed(42)
+ random.shuffle(rows)
+
+ # The first 50k shuffled examples become the test split, the rest the train split.
+ num_test = 50000
+ splits = {'test': rows[:num_test], 'train': rows[num_test:]}
+
+ print("Train:", len(splits['train']))
+ print("Test:", len(splits['test']))
+
+ # Label distribution of the test split.
+ num_labels = Counter()
+ for row in splits['test']:
+     num_labels[row['label']] += 1
+ print(num_labels)
+
+ # Write each split as JSON Lines (one example per line).
+ for split in ['train', 'test']:
+     with open(f'{split}.jsonl', 'w') as fOut:
+         for row in splits[split]:
+             fOut.write(json.dumps(row) + "\n")
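As a quick sanity check, here is a minimal sketch (assuming prepare.py was run in the same directory and wrote train.jsonl and test.jsonl as above) that re-reads the generated files and reports split sizes and label balance:

```python
import json
from collections import Counter

# Re-read the generated JSON Lines files and report size and label balance per split.
for split in ["train", "test"]:
    labels = Counter()
    with open(f"{split}.jsonl") as f:
        for line in f:
            example = json.loads(line)
            labels[example["label_text"]] += 1
    total = sum(labels.values())
    print(split, total, {k: f"{v / total:.1%}" for k, v in labels.items()})
```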