---
license: cc-by-4.0
---

# Distilling Fine-grained Sentiment Understanding from Large Language Models

Fine-grained sentiment analysis (FSA) aims to extract and summarize user opinions from vast amounts of opinionated text. Recent studies demonstrate that large language models (LLMs) possess exceptional sentiment understanding capabilities. However, directly deploying LLMs for FSA applications incurs high inference costs. Therefore, this paper investigates the distillation of fine-grained sentiment understanding from LLMs into small language models (SLMs). We prompt LLMs to examine and interpret the sentiments of given reviews and then use the generated content to pretrain SLMs. Additionally, we develop a comprehensive FSA benchmark to evaluate both SLMs and LLMs. Extensive experiments on this benchmark reveal that: (1) distillation significantly enhances the performance of SLMs on FSA tasks, achieving a 6.00% improvement in F1-score, and the distilled model can outperform Llama-2-7b with only 220M parameters; (2) distillation equips SLMs with excellent zero-shot sentiment classification capabilities, enabling them to match or even exceed their teacher models. These results suggest that distillation from LLMs is a highly promising direction for FSA.
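## Usage

Below is a minimal usage sketch for loading the distilled checkpoint with the standard T5 seq2seq interface. The Hub id `zhang-yice/t5-sentiment-base` is inferred from this repository, and the prompt wording is an illustrative assumption; consult the paper for the exact input template used during distillation and evaluation.

```python
# Minimal sketch: zero-shot sentiment classification with the distilled T5 model.
# The model id and the prompt template below are assumptions, not the paper's
# exact setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "zhang-yice/t5-sentiment-base"  # assumed Hugging Face Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "The battery life is great, but the screen scratches far too easily."
# Hypothetical instruction-style prompt for zero-shot sentiment classification.
prompt = f"What is the sentiment of the following review? Review: {review}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```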

## Citation

```bibtex
@misc{zhang2024distillingfinegrainedsentimentunderstanding,
      title={Distilling Fine-grained Sentiment Understanding from Large Language Models},
      author={Yice Zhang and Guangyu Xie and Hongling Xu and Kaiheng Hou and Jianzhu Bao and Qianlong Wang and Shiwei Chen and Ruifeng Xu},
      year={2024},
      eprint={2412.18552},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.18552},
}
```