bert-base-cased_12112024T103442

This model is a fine-tuned version of google-bert/bert-base-cased; the training dataset is not documented (the auto-generated card lists it as "None"). It achieves the following results on the evaluation set:

  • Loss: 0.4204
  • F1: 0.8800
  • Learning Rate: 0.0 (final value after the cosine schedule has fully decayed)

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch in code follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 600
  • num_epochs: 20
  • mixed_precision_training: Native AMP
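
These settings map directly onto TrainingArguments in the Transformers Trainer API. A minimal sketch of the reported configuration, assuming the standard Trainer workflow (the output_dir is illustrative, and fp16=True requires a CUDA device):

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="bert-base-cased_12112024T103442",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=600,
    num_train_epochs=20,
    fp16=True,                       # "Native AMP" mixed precision; needs a GPU
)
```

These arguments would then be passed to a Trainer together with the model, tokenizer, and datasets, none of which are documented in this card.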

Training results

| Training Loss | Epoch   | Step | Validation Loss | F1     | Learning Rate |
|:-------------:|:-------:|:----:|:---------------:|:------:|:-------------:|
| No log        | 0.9942  | 86   | 1.8033          | 0.0562 | 0.0000        |
| No log        | 2.0     | 173  | 1.6866          | 0.2347 | 0.0000        |
| No log        | 2.9942  | 259  | 1.5021          | 0.4551 | 0.0000        |
| No log        | 4.0     | 346  | 1.2315          | 0.5317 | 0.0000        |
| No log        | 4.9942  | 432  | 1.0796          | 0.5664 | 0.0000        |
| 1.4663        | 6.0     | 519  | 0.9279          | 0.6285 | 0.0000        |
| 1.4663        | 6.9942  | 605  | 0.8522          | 0.6722 | 1e-05         |
| 1.4663        | 8.0     | 692  | 0.7117          | 0.7331 | 0.0000        |
| 1.4663        | 8.9942  | 778  | 0.6128          | 0.7896 | 0.0000        |
| 1.4663        | 10.0    | 865  | 0.5323          | 0.8263 | 0.0000        |
| 1.4663        | 10.9942 | 951  | 0.5330          | 0.8196 | 0.0000        |
| 0.6158        | 12.0    | 1038 | 0.4660          | 0.8616 | 0.0000        |
| 0.6158        | 12.9942 | 1124 | 0.4204          | 0.8800 | 0.0000        |
| 0.6158        | 14.0    | 1211 | 0.4407          | 0.8770 | 0.0000        |
| 0.6158        | 14.9942 | 1297 | 0.4435          | 0.8780 | 0.0000        |
| 0.6158        | 16.0    | 1384 | 0.4412          | 0.8791 | 0.0000        |
| 0.6158        | 16.9942 | 1470 | 0.4424          | 0.8802 | 0.0000        |
| 0.1869        | 18.0    | 1557 | 0.4466          | 0.8809 | 5e-07         |
| 0.1869        | 18.9942 | 1643 | 0.4469          | 0.8795 | 1e-07         |
| 0.1869        | 19.8844 | 1720 | 0.4483          | 0.8798 | 0.0           |

Training loss shows as "No log" until the first logging step is reached. Most learning-rate entries display as 0.0000, which appears to be four-decimal rounding in the original log rather than a true zero rate (the scheduler warms up over 600 steps and then follows a cosine decay). The evaluation results reported at the top of this card (loss 0.4204, F1 0.8800) correspond to the epoch-13 checkpoint at step 1124.
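
The F1 values above would come from a compute_metrics callback passed to the Trainer. A hedged sketch, assuming single-label classification with a weighted average (neither the task nor the averaging mode is documented):

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (predictions, label_ids) pair the Trainer supplies.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Assumption: weighted F1; the averaging mode is not stated in the card.
    return {"f1": f1_score(labels, preds, average="weighted")}
```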

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.19.1
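
For inference, the checkpoint can be loaded with the versions pinned above. A sketch, assuming a sequence-classification head (the actual task head is not documented in this card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Imkaran/bert-base-cased_12112024T103442"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Assumption: a sequence-classification head; swap in the matching Auto class
# (e.g. AutoModelForTokenClassification) if the checkpoint was trained otherwise.
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```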