migueladarlo committed
Commit 3a6c95c · 1 Parent(s): 94fad51
Update README.md
Added How to use, as I can't get the tokenizer to load from the web interface.

README.md CHANGED
@@ -33,6 +33,24 @@ Feed a corpus of tweets to the model to generate label if input is indicative of

Limitation: All token sequences longer than 512 are automatically truncated.
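To check whether a given text will hit that limit before classifying it, the tokenizer can be used to count tokens directly. This is a minimal sketch, assuming the same `distilbert-base-uncased` tokenizer used in the example below; the input string is a placeholder.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

text = "a very long tweet ..."  # placeholder input
n_tokens = len(tokenizer(text)["input_ids"])
if n_tokens > 512:
    print(f"{n_tokens} tokens: anything past the first 512 will be truncated")
else:
    print(f"{n_tokens} tokens: fits within the 512-token limit")
```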
### How to use

You can use this model directly with a pipeline for sentiment analysis:

```python
>>> from transformers import AutoTokenizer, DistilBertForSequenceClassification, pipeline

>>> # Load the base DistilBERT tokenizer and the fine-tuned classification weights
>>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-depression-base")
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

>>> # Pass tokenizer kwargs so that padding and truncation apply inside the pipeline
>>> tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}
>>> classifier('pain peko', **tokenizer_kwargs)
[{'label': 'LABEL_1', 'score': 0.5048992037773132}]
```
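Since the model is intended to be run over a corpus of tweets, the same pipeline can also score a list of texts in a single call. The snippet below is a minimal, self-contained sketch of that batch usage; the tweet strings are placeholders, and `distilbert-depression-base` is assumed to be the same local or hub path used above.

```python
from transformers import AutoTokenizer, DistilBertForSequenceClassification, pipeline

# Same tokenizer/model pairing as in the example above
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-depression-base")
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

# Placeholder tweets; replace with your own corpus
tweets = [
    "sample tweet one",
    "sample tweet two",
]

# Same padding/truncation settings, so sequences longer than 512 tokens are cut off
results = classifier(tweets, padding=True, truncation=True, max_length=512)
for tweet, result in zip(tweets, results):
    print(f"{result['label']} ({result['score']:.3f}): {tweet}")
```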

## Training hyperparameters

The following hyperparameters were used during training: