evgmaslov committed
Commit 6ffc9e2
1 parent: 44f0059

Model save

Files changed (1): README.md (+4 −4)
README.md CHANGED
```diff
@@ -14,8 +14,8 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/evg_maslov/leap71/runs/x06pvik0)
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/evg_maslov/leap71/runs/x06pvik0)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/evg_maslov/leap71/runs/uzt6gty5)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/evg_maslov/leap71/runs/uzt6gty5)
 # Mistral-Nemo-Instruct-2407-cars
 
 This model is a fine-tuned version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) on an unknown dataset.
@@ -46,8 +46,8 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - total_eval_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.01
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_ratio: 0.001
 - num_epochs: 1
 
 ### Framework versions
```
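For reference, the updated hyperparameters from this commit can be sketched as keyword arguments in the style of `transformers.TrainingArguments`. This is a hypothetical reconstruction, not the author's actual training script: only the values shown in the diff are grounded, and the per-device/accumulation split of the total batch size of 16 is an assumption.

```python
# Hypothetical config sketch reflecting the hyperparameters logged in this
# commit. Values marked "assumed" are not present in the diff.
training_kwargs = {
    "per_device_train_batch_size": 4,   # assumed: 4 per device x 4 accumulation = 16 total
    "gradient_accumulation_steps": 4,   # assumed split of total_train_batch_size
    "per_device_eval_batch_size": 16,   # total_eval_batch_size: 16
    "adam_beta1": 0.9,                  # Adam betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",      # changed from "cosine" in this commit
    "warmup_ratio": 0.001,              # changed from 0.01 in this commit
    "num_train_epochs": 1,
}

# Effective total train batch size under the assumed split:
total_train_batch_size = (
    training_kwargs["per_device_train_batch_size"]
    * training_kwargs["gradient_accumulation_steps"]
)
```

These keys could be passed directly as `TrainingArguments(**training_kwargs)` if the run uses the Hugging Face `Trainer`, which the auto-generated model card suggests.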