Add use example to model card.
README.md CHANGED
@@ -40,6 +40,27 @@ The same pipeline was run with two other transformer models and `fasttext` for c
 Two best performing models have been compared with the Mann-Whitney U test to calculate p-values (** denotes p<0.01).
 
 
+## Use example with `simpletransformers==0.63.7`
+
+```python
+from simpletransformers.classification import ClassificationModel
+
+model = ClassificationModel("electra", "classla/bcms-bertic-parlasent-bcs-ter")
+
+predictions, logits = model.predict([
+    "Đački autobusi moraju da voze svaki dan",
+    "Vi niste normalni",
+    "Da bog da ti saksida padne na glavu",
+]
+)
+
+predictions
+# Output: array([1, 0, 0])
+
+[model.config.id2label[i] for i in predictions]
+# Output: ['Other', 'Negative', 'Negative']
+```
+
 ## Citation
 
 If you use the model, please cite the following paper on which the original model is based:
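
The hunk's context line mentions that the two best performing models were compared with the Mann-Whitney U test. For readers unfamiliar with that step, the sketch below shows how such a comparison might look with `scipy.stats.mannwhitneyu`; the score lists, run count, and variable names are illustrative assumptions, not values from the model card or the paper.

```python
# Illustrative sketch only: the scores below are made-up placeholders,
# not results reported in the model card.
from scipy.stats import mannwhitneyu

# Hypothetical per-run scores for the two best performing models
scores_model_a = [0.79, 0.81, 0.80, 0.78, 0.82]
scores_model_b = [0.74, 0.75, 0.73, 0.76, 0.74]

statistic, p_value = mannwhitneyu(scores_model_a, scores_model_b)
print(f"U = {statistic}, p = {p_value:.4f}")  # p < 0.01 is what ** marks in the card
```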
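
The added usage example relies on `simpletransformers`. If the repository also ships a standard `transformers` config and tokenizer (an assumption, not something the diff states), the same checkpoint could in principle be queried through the plain `transformers` pipeline API, roughly as sketched below; the output shape shown in the comment is the pipeline's generic label/score format, not output copied from this model.

```python
# Sketch under the assumption that the checkpoint loads with plain transformers;
# labels would come from the model's id2label mapping, as in the example above.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="classla/bcms-bertic-parlasent-bcs-ter",
)

print(classifier([
    "Đački autobusi moraju da voze svaki dan",
    "Vi niste normalni",
]))
# Expected form: [{'label': ..., 'score': ...}, {'label': ..., 'score': ...}]
```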