---
license: mit
datasets:
- stereoset
language:
- en
metrics:
- seqeval
---

# Multidimensional Token-Level Bias Classifier

The Token-Level Bias Classifier is a transformer-based model that detects and classifies types of bias in text at the token level. It recognizes stereotypical and anti-stereotypical expressions related to gender, race, profession, and religion, and can support applications that aim to mitigate biased language and promote fairness and inclusivity in natural language processing.

## Model Architecture

The model is built on the `distilbert-base-uncased` checkpoint, a smaller and faster distillation of BERT, fine-tuned on a custom dataset for token-level bias classification. It uses a token-classification head of the kind typically used for Named Entity Recognition (NER) tasks.
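
As a rough sketch of how such a head is assembled with the standard `transformers` API (the `num_labels=9` value matches the class list below; this illustrates the setup rather than reproducing the exact training code):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# DistilBERT encoder with a token-classification head sized for the
# nine bias classes listed in the "Classes" section below.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=9,  # unrelated + 4 stereotype / 4 anti-stereotype classes
)
```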

## Model Performance

| Metric                  | Value  |
|-------------------------|--------|
| eval_loss               | 0.0355 |
| eval_precision          | 0.7868 |
| eval_recall             | 0.7662 |
| eval_f1                 | 0.7739 |
| eval_balanced_accuracy  | 0.7662 |
| eval_runtime (s)        | 4.56   |
| eval_samples_per_second | 1196.8 |
| eval_steps_per_second   | 74.9   |
| epoch                   | 6.0    |


## Classes

The model assigns each token one of nine classes:

1. unrelated: The token does not indicate any bias.
2. stereotype_gender: The token indicates a gender stereotype.
3. anti-stereotype_gender: The token indicates a gender anti-stereotype.
4. stereotype_race: The token indicates a racial stereotype.
5. anti-stereotype_race: The token indicates a racial anti-stereotype.
6. stereotype_profession: The token indicates a profession-related stereotype.
7. anti-stereotype_profession: The token indicates a profession-related anti-stereotype.
8. stereotype_religion: The token indicates a religion-related stereotype.
9. anti-stereotype_religion: The token indicates a religion-related anti-stereotype.
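
The label strings live in the model config, so the mapping can be confirmed directly (assuming the checkpoint stores the standard `id2label` entry):

```python
from transformers import AutoConfig

# Inspect the label mapping shipped with the checkpoint.
config = AutoConfig.from_pretrained("wu981526092/token-level-bias-detector")
print(config.id2label)  # e.g. {0: "unrelated", 1: "stereotype_gender", ...}
```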

## Usage

The model can be used with the Hugging Face `pipeline` for token classification (the `"ner"` task):

```python
from transformers import pipeline

# Load the classifier; the same checkpoint provides both model and tokenizer.
nlp = pipeline(
    "ner",
    model="wu981526092/token-level-bias-detector",
    tokenizer="wu981526092/token-level-bias-detector",
)

result = nlp("Text containing potential bias...")
print(result)
```
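
Because WordPiece splits words into subword tokens, the raw output contains one entry per token. If word-level spans are more convenient, the pipeline's `aggregation_strategy` option can merge subword pieces (a sketch; exact output fields depend on your `transformers` version):

```python
from transformers import pipeline

nlp = pipeline(
    "ner",
    model="wu981526092/token-level-bias-detector",
    aggregation_strategy="simple",  # merge subword pieces into word-level spans
)

for entity in nlp("Text containing potential bias..."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```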

## Performance Considerations

Performance can vary with the domain and style of the text being analyzed. It is recommended to evaluate the model on your own task and data to confirm it meets your requirements.
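
One way to run such a check is to score predictions against a small annotated sample with `seqeval` (the metric listed in the frontmatter above). A minimal sketch; note that seqeval assumes BIO-style tag sequences, so the class labels are shown here with hypothetical `B-`/`I-` prefixes:

```python
from seqeval.metrics import classification_report, f1_score

# Hypothetical gold vs. predicted label sequences, one list per sentence.
y_true = [["O", "B-stereotype_gender", "I-stereotype_gender", "O"],
          ["O", "B-stereotype_race", "O"]]
y_pred = [["O", "B-stereotype_gender", "O", "O"],
          ["O", "B-stereotype_race", "O"]]

print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```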

## Limitations and Bias

While the model is designed to detect bias, it will not catch every case: language is complex and subtle, and its detections should not be treated as ground truth. Flagging a bias is not an endorsement of that bias. The model may also misclassify some tokens because DistilBERT's WordPiece tokenization splits words into subword pieces, which can fragment the context a label depends on.