---
license: mit
model-index:
- name: electra-small-offensive-text-detection-da
  results: []
widget:
- text: "Din store idiot"
---

# Danish Offensive Text Detection based on ELECTRA-small

This model is a fine-tuned version of a Danish ELECTRA-small model, trained on a dataset consisting of approximately 5 million Facebook comments on [DR](https://dr.dk/)'s public Facebook pages. The labels have been automatically generated using weak supervision, based on the [Snorkel](https://www.snorkel.org/) framework.
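
As a rough illustration of that weak-supervision setup, each Snorkel labelling function votes offensive, not offensive, or abstains on a comment, and the votes are combined into a single label. The function below is a hypothetical example for illustration only, not one of the actual labelling functions used for this dataset, and the label names are assumptions.

```python
from snorkel.labeling import PandasLFApplier, labeling_function

# Label values for the labelling functions (names are assumptions).
ABSTAIN, NOT_OFFENSIVE, OFFENSIVE = -1, 0, 1


@labeling_function()
def lf_contains_insult(x):
    """Vote OFFENSIVE if the comment contains a known insult keyword."""
    return OFFENSIVE if "idiot" in x.text.lower() else ABSTAIN


# applier = PandasLFApplier(lfs=[lf_contains_insult])
# label_matrix = applier.apply(df)  # df is a DataFrame with a `text` column
```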

The model achieves the highest F1-score on a test set consisting of 500 Facebook comments annotated by two people, of which 41.2% were labelled as offensive:

| **Model** | **Precision** | **Recall** | **F1-score** |
| :-------- | :------------ | :--------- | :----------- |
| `alexandrainst/electra-small-offensive-text-detection-da` | 85.45% | 91.26% | **88.26%** |
| [`alexandrainst/xlm-roberta-base-offensive-text-detection-da`](https://huggingface.co/alexandrainst/xlm-roberta-base-offensive-text-detection-da) | 83.48% | **93.20%** | 88.07% |
| [`A-ttack`](https://github.com/ogtal/A-ttack) | **99.17%** | 58.25% | 73.39% |
| [`DaNLP/da-electra-hatespeech-detection`](https://huggingface.co/DaNLP/da-electra-hatespeech-detection) | 92.19% | 57.28% | 70.66% |
| [`Guscode/DKbert-hatespeech-detection`](https://huggingface.co/Guscode/DKbert-hatespeech-detection) | 84.91% | 43.69% | 57.69% |

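The model can be loaded through the standard `transformers` `text-classification` pipeline, as sketched below; the exact label names and scores returned depend on the model config and are assumptions here.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="alexandrainst/electra-small-offensive-text-detection-da",
)

# Classify the widget example from this card.
print(classifier("Din store idiot"))
# e.g. [{'label': 'Offensive', 'score': 0.97}]  # label name and score assumed
```
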
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- gradient_accumulation_steps: 1
- total_train_batch_size: 32
- seed: 4242
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- max_steps: 500000
- fp16: True
- eval_steps: 1000
- early_stopping_patience: 100
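
The snippet below is a minimal sketch of these settings expressed as `transformers.TrainingArguments`, with early stopping handled by a `Trainer` callback. It is an illustration under standard `Trainer` assumptions, not the authors' actual training script; `model`, `train_dataset` and `eval_dataset` are placeholders.

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

# Mirrors the hyperparameters listed above (sketch only).
training_args = TrainingArguments(
    output_dir="electra-small-offensive-text-detection-da",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=1,
    seed=4242,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=500_000,
    fp16=True,
    evaluation_strategy="steps",
    eval_steps=1000,
    save_strategy="steps",
    save_steps=1000,
    load_best_model_at_end=True,  # required for early stopping
)

# trainer = Trainer(
#     model=model,                  # placeholder
#     args=training_args,
#     train_dataset=train_dataset,  # placeholder
#     eval_dataset=eval_dataset,    # placeholder
#     callbacks=[EarlyStoppingCallback(early_stopping_patience=100)],
# )
# trainer.train()
```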

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1