DarwinAnim8or committed • ffea759 (parent: 571e55d) • Update README.md

README.md CHANGED
---
tags:
- autotrain
- text-classification
- emoji
- sentiment
language:
- en
widget:
- text: I love apples
- text: I hate apples
- text: I hate it when they don't listen
- text: I hate it when they don't listen :(
- text: It's so cosy
- text: there's nothing like nature
co2_eq_emissions:
  emissions: 0.6833689692559574
license: openrail
datasets:
- adorkin/extended_tweet_emojis
---

# Emoji Suggester

This model is a text classification model that suggests emojis for a given text. It uses deberta-v3-base as its backbone.

## Training Data

The dataset this model was trained on has had its emojis replaced with their Unicode characters rather than an index, which previously required a separate file to map the indices to.
The dataset was further modified in the following ways:
* The "US" emoji was removed, as it serves very little purpose in general conversation.
* The dataset was deduplicated.
* Each emoji appears roughly as often as the others, preventing the model from becoming heavily biased toward the emojis that appear most frequently in the training data.
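The balancing step above can be sketched as downsampling every emoji class to the size of the rarest class. This is an illustrative sketch, not the actual preprocessing script used for this model:

```
import random
from collections import defaultdict

def balance(examples, seed=0):
    """Downsample so every emoji label appears equally often.

    `examples` is a list of (text, emoji_label) pairs.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    # Every class is cut down to the size of the smallest one.
    target = min(len(rows) for rows in by_label.values())
    balanced = []
    for rows in by_label.values():
        balanced.extend(rng.sample(rows, target))
    return balanced

data = [("hi", "😂")] * 5 + [("yay", "❤")] * 2
print(len(balance(data)))  # 4: two examples per emoji
```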

## Intended uses & limitations

This model is intended for fun and entertainment purposes, such as adding emojis to social media posts, messages, or emails. It is not intended for serious or sensitive applications such as sentiment analysis, emotion recognition, or hate speech detection. The model may not handle texts that are long, complex, or ambiguous, and may generate inappropriate or irrelevant emojis in some cases. It may also reflect biases and stereotypes present in the training data, such as those around gender, race, or culture. Users are advised to use the model with caution and discretion.

## Model Training Info

- Problem type: Multi-class Classification
- CO2 Emissions (in grams): 0.6834

## Validation Metrics

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love apples"}' https://api-inference.huggingface.co/models/KoalaAI/Emoji-Suggester
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("KoalaAI/Emoji-Suggester", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("KoalaAI/Emoji-Suggester", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")

outputs = model(**inputs)
```
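With a real model, the logits in `outputs.logits` can be mapped back to an emoji through `model.config.id2label`. A minimal sketch of that decoding step, using a hypothetical three-emoji label map in place of the model's real one:

```
import math

# Hypothetical label map; the real one comes from model.config.id2label.
id2label = {0: "❤", 1: "😂", 2: "😭"}

def suggest_emoji(logits):
    """Return the top-scoring emoji and its softmax probability."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best], probs[best]

emoji, prob = suggest_emoji([0.1, 2.3, -1.0])
print(emoji)  # 😂
```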