---
language:
- en
license: mit
base_model: gpt2
tags:
- pytorch
- GPT2ForSequenceClassification
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
model-index:
- name: GPT2-genre-detection
results: []
library_name: transformers
pipeline_tag: text-classification
datasets:
- datadrivenscience/movie-genre-prediction
---

# GPT2-genre-detection
This model is a fine-tuned version of gpt2 on the datadrivenscience/movie-genre-prediction dataset. It achieves the following results on the evaluation set:
- Loss: 1.5267
- Accuracy: 0.4593
- Matthews Correlation: 0.1010
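
For quick inference, the model can be used with the `text-classification` pipeline. This is only a minimal sketch: the repository id below is a placeholder, not the actual model path.

```python
from transformers import pipeline

# Placeholder repo id (assumption); substitute the actual model repository.
classifier = pipeline("text-classification", model="<username>/GPT2-genre-detection")

print(classifier("A young wizard discovers a hidden world of magic and monsters."))
# e.g. [{'label': 'fantasy', 'score': ...}]
```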
## Description
Data-Driven Science organized a competition in which the goal was to fine-tune a model that predicts the genre of a movie from a given synopsis. There are a total of 10 genres, mapped to label ids as follows:
```json
{
  "0": "horror",
  "1": "adventure",
  "2": "action",
  "3": "crime",
  "4": "mystery",
  "5": "family",
  "6": "scifi",
  "7": "thriller",
  "8": "fantasy",
  "9": "romance"
}
```
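
Assuming the mapping above is the `id2label` entry of the model config, raw logits can be decoded to genre names without hard-coding the table. A minimal sketch (the repo id is again a placeholder):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "<username>/GPT2-genre-detection"  # placeholder (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A detective hunts a serial killer across the city.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its genre name.
print(model.config.id2label[logits.argmax(dim=-1).item()])  # e.g. "thriller"
```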
## Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 85855289
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
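
As a sketch only, the hyperparameters above roughly correspond to a `TrainingArguments` configuration like the one below. The values are copied from the list; the output directory name and anything not listed (e.g. the exact `Trainer` setup) are assumptions.

```python
from transformers import TrainingArguments

# Hedged sketch reproducing the hyperparameters listed above.
# output_dir is an assumption, not part of the original training script.
training_args = TrainingArguments(
    output_dir="GPT2-genre-detection",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    seed=85855289,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2,
)
```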
## Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------------------:|
| 1.5765        | 1.0   | 10125 | 1.5562          | 0.4589   | 0.0899               |
| 1.5058        | 2.0   | 20250 | 1.5267          | 0.4593   | 0.1010               |
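
Accuracy and Matthews correlation are standard classification metrics. The original training script is not shown, but a `compute_metrics` function producing these values with the `evaluate` library could look like the following sketch:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
matthews = evaluate.load("matthews_correlation")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) tuple passed by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        **accuracy.compute(predictions=predictions, references=labels),
        **matthews.compute(predictions=predictions, references=labels),
    }
```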
## Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0