---
language: fr
license: mit
datasets:
- amazon_reviews_multi
- allocine
widget:
- text: "Je pensais lire un livre nul, mais finalement je l'ai trouvé super..."
---
DistilCamemBERT-Sentiment
=========================
We present DistilCamemBERT-Sentiment, a [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) model fine-tuned for sentiment analysis in French. It is trained on two datasets, [Amazon Reviews](https://huggingface.co/datasets/amazon_reviews_multi) and [Allociné.fr](https://huggingface.co/datasets/allocine), in order to minimize bias: Amazon reviews tend to be short and similar to one another, whereas Allociné reviews are long, rich texts.
This model is comparable to [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), which is based on [CamemBERT](https://huggingface.co/camembert-base). The drawback of CamemBERT-based models appears at scaling time, for example in production, where inference cost can become a technological issue. To counteract this, the present model **divides the inference time by 2** at the same power consumption, thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base).
Dataset
-------
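As a hedged illustration only (not the exact preprocessing used for this model), the two training corpora mentioned above can be loaded with the `datasets` library. The `"fr"` configuration name for Amazon Reviews is an assumption about that dataset's layout.

```python
from datasets import load_dataset

# Load the French portion of the multilingual Amazon Reviews corpus
# (the "fr" configuration name is an assumption about the dataset layout).
amazon_fr = load_dataset("amazon_reviews_multi", "fr")

# Load the Allociné movie-review dataset (binary positive/negative labels).
allocine = load_dataset("allocine")

print(amazon_fr["train"][0])  # review text plus a 1-5 star rating
print(allocine["train"][0])   # review text plus a 0/1 label
```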
Evaluation results
------------------
Benchmark
---------
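Below is a minimal sketch of how the inference-time claim above could be checked with the `transformers` `pipeline` API. The model id, the sample sentence, and the number of repetitions are illustrative assumptions, not the benchmark protocol used for this card.

```python
import time
from transformers import pipeline

def mean_latency(model_id: str, text: str, n_runs: int = 100) -> float:
    """Mean per-call latency (in seconds) of a text-classification pipeline."""
    clf = pipeline("text-classification", model=model_id)
    clf(text)  # warm-up call
    start = time.perf_counter()
    for _ in range(n_runs):
        clf(text)
    return (time.perf_counter() - start) / n_runs

sample = "Je pensais lire un livre nul, mais finalement je l'ai trouvé super..."
# "cmarkea/distilcamembert-base-sentiment" is an assumed model id for this card;
# re-run the same function with a CamemBERT-based sentiment model to compare.
print(f"{mean_latency('cmarkea/distilcamembert-base-sentiment', sample):.4f} s/call")
```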
How to use DistilCamemBERT-Sentiment
------------------------------------
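A minimal usage sketch with the `transformers` `pipeline` API, assuming the checkpoint for this card is published under the model id `cmarkea/distilcamembert-base-sentiment`.

```python
from transformers import pipeline

# The model id below is an assumption about where this checkpoint is published.
analyzer = pipeline(
    task="text-classification",
    model="cmarkea/distilcamembert-base-sentiment",
    tokenizer="cmarkea/distilcamembert-base-sentiment"
)

result = analyzer(
    "Je pensais lire un livre nul, mais finalement je l'ai trouvé super...",
    return_all_scores=True  # return the score of every sentiment class
)
print(result)
```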