saattrupdan committed
Commit 2036320
1 Parent(s): dbfd7b7

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -12,10 +12,10 @@ The model achieves SOTA on a test set consisting of 600 Facebook comments annota
 
 | **Model** | **Precision** | **Recall** | **F1-score** | **F2-score** |
 | :-------- | :------------ | :--------- | :----------- | :----------- |
-| `alexandrainst/danoff-base` (this) | 74.81% | **89.77%** | **81.61%** | **86.32%** |
-| [`alexandrainst/danoff-small`](https://huggingface.co/alexandrainst/danoff-small) | 74.13% | 89.30% | 81.01% | 85.79% |
+| `alexandrainst/da-offensive-detection-base` (this) | 74.81% | **89.77%** | **81.61%** | **86.32%** |
+| [`alexandrainst/da-offensive-detection-small`](https://huggingface.co/alexandrainst/da-offensive-detection-small) | 74.13% | 89.30% | 81.01% | 85.79% |
 | [`A&ttack`](https://github.com/ogtal/A-ttack) | **97.32%** | 50.70% | 66.67% | 56.07% |
-| [`alexandrainst/da-electra-hatespeech-detection`](https://huggingface.co/alexandrainst/da-electra-hatespeech-detection) | 86.43% | 56.28% | 68.17% | 60.50% |
+| [`alexandrainst/da-hatespeech-detection-small`](https://huggingface.co/alexandrainst/da-hatespeech-detection-small) | 86.43% | 56.28% | 68.17% | 60.50% |
 | [`Guscode/DKbert-hatespeech-detection`](https://huggingface.co/Guscode/DKbert-hatespeech-detection) | 75.41% | 42.79% | 54.60% | 46.84% |
 
 ## Using the model
@@ -24,7 +24,7 @@ You can use the model simply by running the following:
 
 ```python
 >>> from transformers import pipeline
->>> offensive_text_pipeline = pipeline(model="alexandrainst/danoff-base")
+>>> offensive_text_pipeline = pipeline(model="alexandrainst/da-offensive-detection-base")
 >>> offensive_text_pipeline("Din store idiot")
 [{'label': 'Offensive', 'score': 0.9997463822364807}]
 ```
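Since the commit only renames model ids, downstream code keeps working once the checkpoint name is updated. As a minimal sketch (not part of the commit; it assumes only the `transformers` pipeline API already shown in the README, which infers the classification task from the model config), several comments can be scored in one call:

```python
# Minimal sketch (not from the commit): batch classification with the
# renamed checkpoint. Assumes `transformers` is installed; the model id
# is taken from the diff above.
from transformers import pipeline

offensive_text_pipeline = pipeline(model="alexandrainst/da-offensive-detection-base")

# Passing a list of strings returns one {'label', 'score'} dict per input.
comments = ["Din store idiot", "Hav en god dag"]
for comment, result in zip(comments, offensive_text_pipeline(comments)):
    print(f"{comment!r} -> {result['label']} ({result['score']:.4f})")
```

Because the pipeline returns one result dict per input, the loop above prints a label and confidence score for each comment.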