UsernameJustAnother committed 22471ce (parent: 4f4f91a): Update README.md

README.md (updated):

tags:
- unsloth
- mistral
- trl
- rp
- gguf
- experimental
- long-context
---

# Uploaded model

- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407

Standard disclaimer: This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9

New for v6:

- Slightly different source mix. Down to 8,000 records of mostly-human convos and stories, curated by me, trained in ChatML (see the formatting example after this list).
- The stories have been edited to remove author's notes, and the RP chats tweaked to remove many ministrations.
- Different learning rate, and back to Celeste's scaling factor setup (though Celeste trained on -base, while this is -instruct).
- Now with added eval! I worked out how to get eval stats (and wandb) set up, so now I can see my failures in graphical form.

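For anyone new to ChatML, each turn is wrapped in `<|im_start|>` / `<|im_end|>` markers. Here is a minimal rendering sketch; the helper function and the example record are purely illustrative and not taken from the training data:

```
# Minimal ChatML rendering sketch. The record below is made up for illustration;
# it is not from the training set.
def to_chatml(messages):
    """Render a list of {'role', 'content'} dicts as one ChatML string."""
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

record = [
    {"role": "system", "content": "You are a creative writing partner."},
    {"role": "user", "content": "Continue the scene in the tavern."},
    {"role": "assistant", "content": "The door banged open and the rain came in with him."},
]

print(to_chatml(record))
# <|im_start|>system
# You are a creative writing partner.<|im_end|>
# <|im_start|>user
# Continue the scene in the tavern.<|im_end|>
# <|im_start|>assistant
# The door banged open and the rain came in with him.<|im_end|>
```
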
And of course yay Unsloth for letting this all train on a single A100 with variable (wildly variable) context length.

It was trained with the following settings:

```
model = FastLanguageModel.get_peft_model(
    model,
    r = 256,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 128,    # 128 / sqrt(256) gives a scaling factor of 8
    lora_dropout = 0.1,  # Supports any, but = 0 is optimized
    bias = "none",       # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True,   # rsLoRA: adapter scaling factor is lora_alpha / sqrt(r) instead of lora_alpha / r
    loftq_config = None, # And LoftQ
)

lr_scheduler_kwargs = {
    'min_lr': 0.0000024 # Adjust this value as needed
}

# Trainer settings
per_device_train_batch_size = 2,
per_device_eval_batch_size = 2,  # defaults to 8!
gradient_accumulation_steps = 4, # effective train batch size of 8
eval_accumulation_steps = 4,
prediction_loss_only = True,     # when evaluating, only return the loss
warmup_steps = 50,
num_train_epochs = 2,            # for longer training runs! 12 hrs/epoch?
learning_rate = 1e-5,            # Celeste used 8e-5, 1e-4 is from the paper; tried 5e-5, now 1e-5
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
fp16_full_eval = True,           # stops eval from trying to use fp32
eval_strategy = "steps",         # 'no', 'steps' or 'epoch'; don't use this without an eval dataset
eval_steps = 100,                # if eval_strategy is 'steps', evaluate every N steps
logging_steps = 5,               # so eval and logging happen on the same schedule
optim = "adamw_8bit",
weight_decay = 0,
lr_scheduler_type = "cosine_with_min_lr", # linear, cosine, cosine_with_min_lr; default is linear
lr_scheduler_kwargs = lr_scheduler_kwargs, # needed for cosine_with_min_lr
seed = 3407,
```
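
The block above is a settings excerpt rather than a complete script. Below is a rough sketch of how those settings could be wired into an Unsloth + TRL run; the dataset path, max_seq_length, eval split, and the SFTTrainer/TrainingArguments plumbing are my assumptions for illustration, not the actual training code.

```
# Hedged sketch: one plausible way to plug the settings above into Unsloth + TRL.
# Dataset path, max_seq_length, and eval split are placeholders, not the real run.
from unsloth import FastLanguageModel, is_bfloat16_supported
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 8192  # assumption; chosen to cover the "wildly variable" context lengths

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Mistral-Nemo-Instruct-2407",
    max_seq_length = max_seq_length,
    load_in_4bit = True,
)
# ... then FastLanguageModel.get_peft_model(...) exactly as in the block above ...

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder path
split = dataset.train_test_split(test_size=0.05, seed=3407)              # eval split used by eval_steps

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = split["train"],
    eval_dataset = split["test"],
    dataset_text_field = "text",   # records pre-rendered to ChatML strings
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        output_dir = "outputs",
        report_to = "wandb",       # the eval/wandb graphs mentioned above
        per_device_train_batch_size = 2,
        per_device_eval_batch_size = 2,
        gradient_accumulation_steps = 4,
        eval_accumulation_steps = 4,
        prediction_loss_only = True,
        warmup_steps = 50,
        num_train_epochs = 2,
        learning_rate = 1e-5,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        fp16_full_eval = True,
        eval_strategy = "steps",
        eval_steps = 100,
        logging_steps = 5,
        optim = "adamw_8bit",
        weight_decay = 0,
        lr_scheduler_type = "cosine_with_min_lr",
        lr_scheduler_kwargs = {"min_lr": 2.4e-6},
        seed = 3407,
    ),
)
trainer.train()
```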

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)