
results

This model is a fine-tuned version of Salesforce/codet5-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4923

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
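With a linear scheduler and no warmup steps listed, the learning rate decays from the configured 5e-05 toward zero over the course of training. A minimal sketch of that decay, using the ~410 logged optimizer steps as the total for illustration (the exact step count and any warmup are assumptions, not stated in this card):

```python
def linear_lr(step, total_steps, base_lr=5e-05):
    """Linear decay from base_lr to 0 over total_steps (no warmup assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Start of training: the configured 5e-05.
print(linear_lr(0, 410))    # 5e-05
# Halfway through: the rate has halved.
print(linear_lr(205, 410))  # 2.5e-05
# Final step: decayed to 0.
print(linear_lr(410, 410))  # 0.0
```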

Training results

Training Loss   Epoch    Step   Validation Loss
1.8427          0.0719   10     1.2732
1.161           0.1439   20     0.8768
1.0161          0.2158   30     0.8193
0.9054          0.2878   40     0.7536
0.8195          0.3597   50     0.7090
0.9785          0.4317   60     0.6815
0.8085          0.5036   70     0.6641
0.8913          0.5755   80     0.6521
0.6526          0.6475   90     0.6326
0.6447          0.7194   100    0.6177
0.6836          0.7914   110    0.6067
0.7824          0.8633   120    0.5950
0.5367          0.9353   130    0.5840
0.7856          1.0072   140    0.5763
0.6118          1.0791   150    0.5712
0.7442          1.1511   160    0.5639
0.5305          1.2230   170    0.5587
0.6777          1.2950   180    0.5507
0.5568          1.3669   190    0.5469
0.7629          1.4388   200    0.5446
0.5966          1.5108   210    0.5385
0.7105          1.5827   220    0.5335
0.7294          1.6547   230    0.5281
0.6798          1.7266   240    0.5233
0.55            1.7986   250    0.5189
0.6927          1.8705   260    0.5155
0.4219          1.9424   270    0.5123
0.6465          2.0144   280    0.5106
0.5013          2.0863   290    0.5093
0.6224          2.1583   300    0.5056
0.5155          2.2302   310    0.5040
0.4718          2.3022   320    0.5006
0.7149          2.3741   330    0.4984
0.5848          2.4460   340    0.4974
0.6697          2.5180   350    0.4959
0.438           2.5899   360    0.4958
0.7051          2.6619   370    0.4952
0.5555          2.7338   380    0.4941
0.6686          2.8058   390    0.4934
0.6081          2.8777   400    0.4928
0.5672          2.9496   410    0.4924
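As a rough sanity check (an inference from the logged values above, not something stated in the card), the step-to-epoch ratio implies about 139 optimizer steps per epoch, which at a train batch size of 4 corresponds to roughly 560 training examples:

```python
# Derive the approximate training-set size from the first logged row
# (step 10 at epoch 0.0719) and the configured batch size. These are
# estimates read off the table above, not values stated in the card.
step, epoch = 10, 0.0719
train_batch_size = 4

steps_per_epoch = step / epoch                         # ~139
approx_examples = steps_per_epoch * train_batch_size   # ~556

print(round(steps_per_epoch), round(approx_examples))  # 139 556
```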

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1
Model details

  • Model size: 60.5M params
  • Tensor type: F32
  • Format: Safetensors

Model tree for osmanperviz/results

  • Fine-tuned from Salesforce/codet5-small