wu981526092 committed
Commit c5a27a6 • 1 Parent(s): 558a4bd
Update README.md
README.md CHANGED
@@ -12,13 +12,13 @@ metrics:
 - accuracy
 ---
 
-# Token-Level
+# Token-Level Stereotype Classifier
 
-The Token-Level
+The Token-Level Stereotype Classifier is a transformer-based model developed to detect and classify different types of stereotypes in text at the token level. It is designed to recognize both stereotypes and anti-stereotypes relating to gender, race, profession, and religion. The model can help in developing applications aimed at mitigating stereotypical language use and promoting fairness and inclusivity in natural language processing tasks.
 
 ## Model Architecture
 
-The model is built using the pretrained model. It is fine-tuned on a custom dataset for the task of sentence-level
+The model is built on a pretrained transformer model and fine-tuned on a custom dataset for token-level stereotype classification. It uses a Token Classification architecture of the kind typically used for tasks such as Named Entity Recognition.
 
 ## Model Performance
 
@@ -39,7 +39,7 @@ The model is built using the pretrained model. It is fine-tuned on a custom data
 
 The model identifies nine classes, including:
 
-1. unrelated: The token does not indicate any
+1. unrelated: The token does not indicate any stereotype.
 2. stereotype_gender: The token indicates a gender stereotype.
 3. anti-stereotype_gender: The token indicates an anti-gender stereotype.
 4. stereotype_race: The token indicates a racial stereotype.
@@ -56,16 +56,8 @@ The model can be used as a part of the Hugging Face's pipeline for Named Entity
 ```python
 from transformers import pipeline
 
-nlp = pipeline("ner", model="wu981526092/Token-Level-
-result = nlp("Text containing potential
+nlp = pipeline("ner", model="wu981526092/Token-Level-Stereotype-Detector", tokenizer="wu981526092/Token-Level-Stereotype-Detector")
+result = nlp("Text containing potential stereotype...")
 
 print(result)
 ```
-
-## Performance
-
-The performance of the model can vary depending on the specifics of the text being analyzed. It's recommended to evaluate the model on your specific task and text data to ensure it meets your requirements.
-
-## Limitations and Bias
-
-While the model is designed to detect bias, it may not be perfect in its detections due to the complexities and subtleties of language. Biases detected by the model do not represent endorsement of these biases. The model may also misclassify some tokens due to the limitations of BERT's WordPiece tokenization approach.
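A note on the updated usage snippet: a "ner" pipeline emits one prediction per sub-word token, and the removed Limitations section pointed out that WordPiece splitting can lead to misclassified pieces. Below is a minimal sketch of grouping predictions back into whole words and filtering out everything tagged as unrelated. It assumes the model id from the diff resolves on the Hub, that the pipeline's standard aggregation_strategy="simple" option behaves as usual for this model, and that the label strings match the nine-class list above; all three are assumptions to verify against the model's config.

```python
from transformers import pipeline

# Model id as written in the updated README; assumed to resolve on the Hub.
MODEL_ID = "wu981526092/Token-Level-Stereotype-Detector"

# aggregation_strategy="simple" merges sub-word pieces back into whole words,
# so each returned entry covers a word rather than a single WordPiece token.
nlp = pipeline(
    "ner",
    model=MODEL_ID,
    tokenizer=MODEL_ID,
    aggregation_strategy="simple",
)

result = nlp("Text containing potential stereotype...")

# Drop spans tagged "unrelated" (label name assumed from the class list above)
# and keep everything the model flagged as a stereotype or anti-stereotype.
flagged = [span for span in result if span["entity_group"] != "unrelated"]

for span in flagged:
    print(span["word"], span["entity_group"], round(float(span["score"]), 3))
```

With aggregation enabled, each returned entry also carries start and end character offsets, so downstream code can highlight the flagged span directly in the source text.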