Update README.md
README.md
CHANGED
@@ -11,7 +11,9 @@ widget:
 DistilCamemBERT-Sentiment
 =========================
 
-We present DistilCamemBERT-Sentiment which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine tuned for the sentiment analysis task for the French language.
+We present DistilCamemBERT-Sentiment, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the sentiment analysis task in French. This model was built on two datasets, [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) and [allocine](https://huggingface.co/datasets/allocine), in order to minimize bias. Indeed, Amazon reviews are very similar to one another and relatively short; Allocine reviews, by contrast, are long, rich texts.
+
+This model is close to [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem with CamemBERT-based models arises at scaling time, for example in the production phase, where inference cost can become a technological issue. To counteract this effect, we propose this model, which **divides the inference time by 2** with the same power consumption, thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base).
 
 Dataset
 -------
@@ -22,5 +24,5 @@ Evaluation results
 Benchmark
 ---------
 
-How to use DistilCamemBERT-
-
+How to use DistilCamemBERT-Sentiment
+------------------------------------
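This change adds a "How to use DistilCamemBERT-Sentiment" heading whose body lies outside the diff. A minimal usage sketch with the `transformers` text-classification pipeline could look like the following; the model id `cmarkea/distilcamembert-base-sentiment` is an assumption inferred from the README title, so verify the exact name on the Hub:

```python
from transformers import pipeline

# Assumed model id, inferred from the README title; confirm on the Hugging Face Hub.
analyzer = pipeline(
    task="text-classification",
    model="cmarkea/distilcamembert-base-sentiment",
)

# Classify a short French review; the pipeline returns the top label and its score.
result = analyzer("J'aime le camembert !")
print(result)
```

The same pipeline object can be called on a list of strings to classify reviews in batches.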