DialoGPT-large-faqs-block-size-128-bs-16-lr-1e-05-deepspeed-stage2

This model is a fine-tuned version of microsoft/DialoGPT-large on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.4123
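
A minimal loading-and-generation sketch with transformers, assuming the usual DialoGPT single-turn prompt format; the repo id matches this card's title, while the prompt text and generation settings are illustrative assumptions:

```python
# Minimal sketch: load this checkpoint and generate a single reply.
# The prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DrishtiSharma/DialoGPT-large-faqs-block-size-128-bs-16-lr-1e-05-deepspeed-stage2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# DialoGPT-style single-turn exchange: user text followed by the EOS token.
prompt = "How do I reset my password?" + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the reply).
reply = tokenizer.decode(output_ids[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```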

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
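
The dataset itself is not documented here, but the "block-size-128" in the model name suggests the standard causal-LM preprocessing of concatenating tokenized examples and splitting them into fixed 128-token blocks. A hedged sketch of that step, with placeholder column names:

```python
# Hedged sketch of "block size 128" preprocessing for causal-LM fine-tuning:
# concatenate tokenized examples, then cut them into fixed 128-token blocks.
# Dataset and column names are placeholders, not taken from this card.
from itertools import chain

block_size = 128

def group_texts(examples):
    # Concatenate every tokenized field (input_ids, attention_mask, ...) end to end.
    concatenated = {k: list(chain(*examples[k])) for k in examples}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    # Split the long sequence into non-overlapping blocks of 128 tokens.
    blocks = {
        k: [v[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, v in concatenated.items()
    }
    # For causal LM training the labels are the inputs themselves.
    blocks["labels"] = blocks["input_ids"].copy()
    return blocks

# Typically applied with: tokenized_dataset.map(group_texts, batched=True)
```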

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
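
A hedged sketch of how these settings could map onto Hugging Face TrainingArguments with a DeepSpeed ZeRO stage-2 config; anything not in the list above (evaluation strategy, the DeepSpeed dictionary) is an assumption, not taken from this card:

```python
# Hedged sketch: one plausible mapping of the listed hyperparameters onto
# TrainingArguments with a DeepSpeed ZeRO stage-2 config. Values not listed
# in the card (eval strategy, the DeepSpeed dict) are assumptions.
from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {"stage": 2},          # "deepspeed-stage2" in the model name
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="DialoGPT-large-faqs-block-size-128-bs-16-lr-1e-05-deepspeed-stage2",
    learning_rate=1e-5,                 # from the card
    per_device_train_batch_size=8,      # from the card
    per_device_eval_batch_size=8,       # from the card
    seed=42,                            # from the card
    num_train_epochs=20,                # from the card
    lr_scheduler_type="linear",         # from the card
    evaluation_strategy="epoch",        # assumption: matches the per-epoch results below
    deepspeed=ds_config,                # assumption: minimal stage-2 config
)
```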

Training results

Training Loss | Epoch | Step | Validation Loss
------------- | ----- | ---- | ---------------
No log        |  1.0  |   40 | 4.2793
No log        |  2.0  |   80 | 3.5752
No log        |  3.0  |  120 | 3.1238
No log        |  4.0  |  160 | 2.8875
No log        |  5.0  |  200 | 2.7358
No log        |  6.0  |  240 | 2.6321
No log        |  7.0  |  280 | 2.5629
No log        |  8.0  |  320 | 2.5147
No log        |  9.0  |  360 | 2.4783
No log        | 10.0  |  400 | 2.4595
No log        | 11.0  |  440 | 2.4370
No log        | 12.0  |  480 | 2.4229
2.7646        | 13.0  |  520 | 2.4167
2.7646        | 14.0  |  560 | 2.4109
2.7646        | 15.0  |  600 | 2.4084
2.7646        | 16.0  |  640 | 2.4146
2.7646        | 17.0  |  680 | 2.4085
2.7646        | 18.0  |  720 | 2.4139
2.7646        | 19.0  |  760 | 2.4137
2.7646        | 20.0  |  800 | 2.4123

Framework versions

  • Transformers 4.33.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4.dev0
  • Tokenizers 0.13.3