---
task_categories:
  - text-classification
language:
  - en
---

# Movie Review Data

## Original README

=======

Introduction

This README v1.0 (June, 2005) for the v1.0 sentence polarity dataset comes from the URL http://www.cs.cornell.edu/people/pabo/movie-review-data .

=======

Citation Info

This data was first used in Bo Pang and Lillian Lee, "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", Proceedings of the ACL, 2005.

```bibtex
@InProceedings{Pang+Lee:05a,
  author    = {Bo Pang and Lillian Lee},
  title     = {Seeing stars: Exploiting class relationships for sentiment
               categorization with respect to rating scales},
  booktitle = {Proceedings of the ACL},
  year      = 2005
}
```

=======

Data Format Summary

  • rt-polaritydata.tar.gz: contains this readme and two data files that were used in the experiments described in Pang/Lee ACL 2005.

    Specifically:

    • rt-polarity.pos contains 5331 positive snippets
    • rt-polarity.neg contains 5331 negative snippets

    Each line in these two files corresponds to a single snippet (usually containing roughly one single sentence); all snippets are down-cased.
    The snippets were labeled automatically, as described below (see section "Label Decision").

    Note: The original source files from which the data in rt-polaritydata.tar.gz was derived can be found in the subjective part (Rotten Tomatoes pages) of subjectivity_html.tar.gz (released with subjectivity dataset v1.0).
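The format described above can be sanity-checked with a short sketch (not part of the original release; the sample lines below are merely illustrative of the file style):

```python
def check_snippets(lines):
    """Verify the stated format: one non-empty, down-cased snippet per line."""
    for line in lines:
        snippet = line.strip()
        assert snippet, "empty snippet"
        assert snippet == snippet.lower(), "snippet is not down-cased"
    return len(lines)

# Illustrative lines in the style of rt-polarity.neg:
sample = ["simplistic , silly and tedious .", "the film is strictly routine ."]
print(check_snippets(sample))  # 2
```

Running `check_snippets` over the real files should return 5331 for each of `rt-polarity.pos` and `rt-polarity.neg`.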

=======

Label Decision

We assumed snippets (from Rotten Tomatoes webpages) for reviews marked with "fresh" are positive, and those for reviews marked with "rotten" are negative.
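The labeling rule above amounts to a simple mapping from the Rotten Tomatoes freshness mark to a sentiment label (the function name here is illustrative, not from the original pipeline):

```python
def snippet_label(rt_mark: str) -> int:
    """Map a Rotten Tomatoes mark to a label: "fresh" -> 1, "rotten" -> 0."""
    mark = rt_mark.strip().lower()
    if mark == "fresh":
        return 1
    if mark == "rotten":
        return 0
    raise ValueError(f"unexpected mark: {rt_mark!r}")

print(snippet_label("fresh"))   # 1
print(snippet_label("rotten"))  # 0
```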

## Preprocessing

To build CSV files with `text` and `label` fields, we use the following script.

```python
import csv
import random

# NOTE: The original files are encoded in "latin_1". We convert them to "utf8".
with open("rt-polarity.pos", encoding="latin_1") as f:
    texts_pos = [line.strip() for line in f]
with open("rt-polarity.neg", encoding="latin_1") as f:
    texts_neg = [line.strip() for line in f]

rows_pos = [{"text": text, "label": 1} for text in texts_pos]
rows_neg = [{"text": text, "label": 0} for text in texts_neg]

# NOTE: For fair evaluation, we split the data into train and test sets. For
# researchers who want a different setting, we also provide the whole dataset.
# NOTE: We follow the split setting in the LM-BFF paper.
rows_whole = rows_pos + rows_neg
random.Random(42).shuffle(rows_whole)
rows_test, rows_train = rows_whole[:2000], rows_whole[2000:]

with open("whole.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_whole)
with open("train.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_train)
with open("test.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_test)
```
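The split is reproducible: seeding `random.Random(42)` yields the same shuffle every run, so the first 2000 shuffled rows always form the test set and the remaining 8662 the train set. A toy demonstration on row indices (10662 = 2 × 5331):

```python
import random

# Shuffle 10662 row indices with the fixed seed used by the script above.
rows = list(range(10662))
random.Random(42).shuffle(rows)
test, train = rows[:2000], rows[2000:]
print(len(test), len(train))  # 2000 8662

# Re-running with the same seed reproduces the identical split.
rows_again = list(range(10662))
random.Random(42).shuffle(rows_again)
assert rows_again[:2000] == test
```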