
Model Info

This model was developed and fine-tuned for sentiment classification of Turkish product reviews. It was fine-tuned on the hepsiburada.com product review dataset and predicts one of two labels (a minimal usage sketch follows the label list):

  • LABEL_0: negative review
  • LABEL_1: positive review
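
The sketch below shows one way to run the model. It assumes the checkpoint loads with the standard transformers text-classification pipeline and that the repository id is anilguven/albert_tr_turkish_product_reviews; the example review text and the label-name dictionary are illustrative, not part of the original card.

```python
# Minimal inference sketch (assumes the standard transformers
# text-classification pipeline works with this checkpoint).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="anilguven/albert_tr_turkish_product_reviews",
)

# LABEL_0 = negative review, LABEL_1 = positive review
label_names = {"LABEL_0": "negative", "LABEL_1": "positive"}

result = classifier("Ürün çok kaliteli, hızlı kargo için teşekkürler.")[0]
print(label_names[result["label"]], result["score"])
```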

Model Sources

Preprocessing

You must apply stopword removal and stemming or lemmatization to Turkish text before classification.
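
A possible preprocessing sketch is given below. The card does not name specific tools, so NLTK's Turkish stopword list and the Snowball Turkish stemmer are assumptions; substitute a Turkish lemmatizer (e.g. Zemberek) if you prefer lemmatization over stemming.

```python
# Preprocessing sketch: stopword removal + stemming for Turkish.
# Library choices are illustrative; the model card does not prescribe them.
import nltk
from nltk.corpus import stopwords
import snowballstemmer

nltk.download("stopwords")
turkish_stopwords = set(stopwords.words("turkish"))
stemmer = snowballstemmer.stemmer("turkish")

def preprocess(text: str) -> str:
    # Lowercase, drop stopwords, then stem each remaining token.
    tokens = [t for t in text.lower().split() if t not in turkish_stopwords]
    return " ".join(stemmer.stemWords(tokens))

print(preprocess("Bu ürün gerçekten çok güzel ve kaliteli"))
```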

Results

  • auprc = 0.9588538437395457
  • auroc = 0.9653234951018236
  • eval_loss = 0.37227460598843365
  • fn (false negatives) = 188
  • fp (false positives) = 288
  • mcc (Matthews correlation coefficient) = 0.826593937301856
  • tn (true negatives) = 2479
  • tp (true positives) = 2516
  • accuracy = 91.30%
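
The reported accuracy and MCC are consistent with the confusion-matrix counts above; the short check below (not part of the original card) recomputes them from tp, tn, fp, and fn.

```python
# Sanity check: derive accuracy and MCC from the reported confusion-matrix counts.
from math import sqrt

tp, tn, fp, fn = 2516, 2479, 288, 188

accuracy = (tp + tn) / (tp + tn + fp + fn)
mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"accuracy = {accuracy:.4f}")  # ~0.9130
print(f"mcc      = {mcc:.4f}")       # ~0.8266
```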

Citation

BibTeX:

@INPROCEEDINGS{9559007,
  author={Guven, Zekeriya Anil},
  booktitle={2021 6th International Conference on Computer Science and Engineering (UBMK)},
  title={The Effect of BERT, ELECTRA and ALBERT Language Models on Sentiment Analysis for Turkish Product Reviews},
  year={2021},
  pages={629-632},
  keywords={Computer science;Sentiment analysis;Analytical models;Computational modeling;Bit error rate;Time factors;Random forests;Sentiment Analysis;Language Model;Product Review;Machine Learning;E-commerce},
  doi={10.1109/UBMK52708.2021.9559007}
}

APA:

Guven, Z. A. (2021, September). The effect of BERT, ELECTRA and ALBERT language models on sentiment analysis for Turkish product reviews. In 2021 6th International Conference on Computer Science and Engineering (UBMK) (pp. 629-632). IEEE.

