---
language: "en"
tags:
- distilroberta
- sentiment
- NSFW
- inappropriate
- spam
- twitter
- reddit

widget:
- text: "."
- text: "What?"
- text: "Wow, congratulations! So excited for you!"

---

# Fine-tuned DistilRoBERTa-base for NSFW Classification

# Model Description

DistilRoBERTa-base is a distilled transformer model that can be fine-tuned for text classification tasks such as sentiment analysis. I fine-tuned the model on Reddit posts to classify not safe for work (NSFW) content, i.e., text that is inappropriate or unprofessional. The model predicts two classes: NSFW and safe for work (SFW).

The model is a fine-tuned version of [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert).

It was fine-tuned on 14,317 Reddit posts pulled from the [Reddit API](https://praw.readthedocs.io/en/stable/).
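
To make the two-class output concrete, the checkpoint can also be loaded without the pipeline wrapper. The snippet below is a minimal sketch, assuming PyTorch is installed and using the model ID from the usage section further down; the label names are read from the model's own config rather than hard-coded.

```python
# Minimal sketch (not part of the original card): load the checkpoint directly
# and print the probability of each of the two classes (NSFW / SFW).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "michellejieli/NSFW_text_classification"  # same ID as in the usage section
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Wow, congratulations! So excited for you!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2) for the two-class head

probs = torch.softmax(logits, dim=-1)[0]
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx].item():.4f}")  # probability per class
```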

# How to Use

```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="michellejieli/NSFW_text_classification")
classifier("I see you’ve set aside this special time to humiliate yourself in public.")
```
|
36 |
+
|
37 |
+
```python
|
38 |
+
Output:
|
39 |
+
[{'label': 'NSFW', 'score': 0.998853325843811}]
|
40 |
+
```
|
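
If you want scores for both classes rather than just the top label, or need to classify several posts at once, the pipeline also accepts a list of texts. The snippet below is only an illustrative sketch: `top_k=None` returns all labels on recent transformers releases (older releases use `return_all_scores=True` instead), and `truncation=True` guards against posts longer than the model's maximum input length.

```python
# Usage sketch (not part of the original card): batch classification with both
# class scores returned for each post.
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="michellejieli/NSFW_text_classification")

posts = [
    "Wow, congratulations! So excited for you!",
    "I see you’ve set aside this special time to humiliate yourself in public.",
]

results = classifier(posts, top_k=None, truncation=True)
for post, scores in zip(posts, results):
    print(post, scores)  # e.g. [{'label': 'NSFW', 'score': ...}, {'label': 'SFW', 'score': ...}]
```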

# Contact

Please reach out to [michelle.li851@duke.edu](mailto:michelle.li851@duke.edu) if you have any questions or feedback.

---