---
license: mit
task_categories:
- text-classification
language:
- he
size_categories:
- 10K<n<100K
---
## Sentiment Analysis Data for the Hebrew Language
**Dataset Description:**
This dataset contains the Hebrew sentiment analysis benchmark introduced by Amram et al. (2018).
**Data Structure:**
The benchmark comprises roughly 12K Hebrew social media comments labeled for sentiment, provided in two settings: token-based and morpheme-based (see the abstract in the citation below). The data was used for the project on [improving word embeddings with graph knowledge for low-resource languages](https://github.com/pyRis/retrofitting-embeddings-lrls?tab=readme-ov-file).
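As a quick orientation, the sketch below loads a CSV split of this dataset with the Hugging Face `datasets` library and inspects a few rows. The file name `train.csv` and the column names `text` and `label` are assumptions for illustration only, not guaranteed by this repository; adjust them to the actual files provided.

```python
# Minimal sketch: load the CSV data with the Hugging Face `datasets` library.
# NOTE: the file name "train.csv" and the column names "text"/"label" are
# assumptions for illustration; adjust them to match the files in this repo.
from datasets import load_dataset

dataset = load_dataset("csv", data_files={"train": "train.csv"})

# Peek at the first example.
print(dataset["train"][0])

# Rough label distribution over a small sample of rows.
sample = dataset["train"].select(range(100))
labels = [row["label"] for row in sample]
print({label: labels.count(label) for label in set(labels)})
```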
**Citation:**
```bibtex
@inproceedings{amram-etal-2018-representations,
    title = "Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew",
    author = "Amram, Adam and
      Ben David, Anat and
      Tsarfaty, Reut",
    editor = "Bender, Emily M. and
      Derczynski, Leon and
      Isabelle, Pierre",
    booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
    month = aug,
    year = "2018",
    address = "Santa Fe, New Mexico, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/C18-1190",
    pages = "2242--2252",
    abstract = "This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task performance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89{\%} accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.",
}
```