---
datasets:
- imdb
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- movies
- gpt2
- sentiment-analysis
- fine-tuned
---

# Fine-tuned GPT-2 Model for IMDb Movie Review Sentiment Analysis

## Model Description

This is a GPT-2 model fine-tuned on the IMDb movie review dataset for sentiment analysis. Given the text of a movie review, it predicts one of two classes: positive or negative.

## Intended Uses & Limitations

This model is intended for binary sentiment analysis of English movie reviews: it determines whether a review is positive or negative. It should not be used for languages other than English, or for text with ambiguous sentiment.

## How to Use

Here's a simple way to use this model:

```python
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("hipnologo/gpt2-imdb-finetune")
model = GPT2ForSequenceClassification.from_pretrained("hipnologo/gpt2-imdb-finetune")
model.eval()

text = "Your review text here!"

# Encode the input text
input_ids = tokenizer.encode(text, return_tensors="pt")

# Move the input tensor to the same device as the model
input_ids = input_ids.to(model.device)

# Run the forward pass without tracking gradients
with torch.no_grad():
    logits = model(input_ids).logits

# The index of the highest logit is the predicted class (0 = negative, 1 = positive)
predicted_class = logits.argmax(-1).item()

print(f"The sentiment predicted by the model is: {'Positive' if predicted_class == 1 else 'Negative'}")
```
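
Alternatively, the `pipeline` API wraps tokenization and inference in a single call. This is a minimal sketch; depending on how the repo's config was saved, the returned labels may read `LABEL_0`/`LABEL_1` (negative/positive) rather than human-readable names:

```python
from transformers import pipeline

# The pipeline loads both the tokenizer and the model from the Hub repo
classifier = pipeline("text-classification", model="hipnologo/gpt2-imdb-finetune")

result = classifier("A gripping story with wonderful performances.")[0]
print(result["label"], round(result["score"], 3))
```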

## Training Procedure

The model was trained with the `Trainer` class from the `transformers` library, using a learning rate of 2e-5, a batch size of 1, and 3 training epochs.
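
The original training script is not part of this repo, so the sketch below reconstructs an equivalent setup from the hyperparameters above. The output directory, the 512-token truncation, and the use of the EOS token as padding are illustrative assumptions, not confirmed details of the original run:

```python
from datasets import load_dataset
from transformers import (
    GPT2ForSequenceClassification,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# GPT-2 ships without a padding token, so reuse EOS for padding (assumption)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Tokenize the IMDb splits; the 512-token cap is an illustrative choice
imdb = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = imdb.map(tokenize, batched=True)

# Hyperparameters as stated above; output_dir is illustrative
training_args = TrainingArguments(
    output_dir="gpt2-imdb-finetune",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```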

## Fine-tuning Details

The model was fine-tuned on the IMDb movie review dataset, which provides 25,000 labeled reviews for training and 25,000 for testing.
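
For reference, the IMDb dataset on the Hugging Face Hub encodes labels as integers (0 = negative, 1 = positive), which matches the class mapping assumed by the inference snippet above. A quick way to inspect a sample:

```python
from datasets import load_dataset

imdb = load_dataset("imdb")

sample = imdb["train"][0]
print(imdb["train"].features["label"].names)  # ['neg', 'pos']
print(sample["label"], sample["text"][:200])  # integer label plus a review excerpt
```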