llm2vec-Qwen2-0.5B-shopping-Quar-v2

This model is a fine-tuned version of Qwen/Qwen2-0.5B on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.1151
  • Accuracy: 0.4539

Model description

More information needed

Intended uses & limitations

More information needed
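
No usage details are documented yet. As a non-authoritative illustration only: the repository name suggests an LLM2Vec-style text encoder built on Qwen2-0.5B for shopping text, so loading it for sentence embeddings might look roughly like the sketch below. The use of the llm2vec package, the direct loadability of this repository with it, and the example shopping queries are all assumptions, not facts stated on this card.

```python
# Hedged sketch: assumes the McGill-NLP `llm2vec` package and that this
# checkpoint can be loaded directly with LLM2Vec.from_pretrained.
import torch
from llm2vec import LLM2Vec

l2v = LLM2Vec.from_pretrained(
    "youssefkhalil320/llm2vec-Qwen2-0.5B-shopping-Quar-v2",
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Encode a few hypothetical shopping queries into dense vectors.
queries = ["wireless noise-cancelling headphones", "running shoes for flat feet"]
embeddings = l2v.encode(queries)  # tensor of shape (len(queries), hidden_size)
print(embeddings.shape)
```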

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
  • mixed_precision_training: Native AMP
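
For reference, these values map onto standard Hugging Face TrainingArguments roughly as sketched below. This is a minimal sketch using only the hyperparameters listed above; the output directory is a placeholder and all model/dataset wiring is omitted.

```python
# Minimal sketch: only the hyperparameters listed above, expressed as
# standard Hugging Face TrainingArguments. The output directory is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llm2vec-Qwen2-0.5B-shopping-Quar-v2",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,           # "Native AMP" mixed-precision training
)
```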

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy |
|--------------:|-------:|------:|----------------:|---------:|
| 3.4292        | 0.0510 | 1000  | 3.3647          | 0.4207   |
| 3.3217        | 0.1019 | 2000  | 3.2997          | 0.4283   |
| 3.274         | 0.1529 | 3000  | 3.2483          | 0.4345   |
| 3.2467        | 0.2038 | 4000  | 3.1933          | 0.4407   |
| 3.2093        | 0.2548 | 5000  | 3.1931          | 0.4439   |
| 3.1807        | 0.3057 | 6000  | 3.1661          | 0.4476   |
| 3.1978        | 0.3567 | 7000  | 3.1433          | 0.4486   |
| 3.161         | 0.4076 | 8000  | 3.1091          | 0.4539   |
| 3.1554        | 0.4586 | 9000  | 3.1044          | 0.4529   |
| 3.1405        | 0.5095 | 10000 | 3.1151          | 0.4539   |
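
Purely for illustration, the trend in the table (validation loss falling from 3.3647 to about 3.10 and accuracy rising from 0.4207 to about 0.45 over the first half-epoch) can be visualized with the snippet below, which uses only the values reported above.

```python
# Illustrative only: plots the validation metrics reported in the table above.
import matplotlib.pyplot as plt

steps = [1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]
val_loss = [3.3647, 3.2997, 3.2483, 3.1933, 3.1931, 3.1661, 3.1433, 3.1091, 3.1044, 3.1151]
accuracy = [0.4207, 0.4283, 0.4345, 0.4407, 0.4439, 0.4476, 0.4486, 0.4539, 0.4529, 0.4539]

fig, ax1 = plt.subplots()
ax1.plot(steps, val_loss, marker="o", color="tab:blue", label="validation loss")
ax1.set_xlabel("step")
ax1.set_ylabel("validation loss")

ax2 = ax1.twinx()
ax2.plot(steps, accuracy, marker="s", color="tab:orange", label="accuracy")
ax2.set_ylabel("accuracy")

fig.tight_layout()
plt.show()
```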

Framework versions

  • Transformers 4.43.4
  • Pytorch 1.12.0+cu102
  • Datasets 2.20.0
  • Tokenizers 0.19.1
