Update README.md
README.md CHANGED
@@ -56,7 +56,8 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 
 # Notes:
 - For small datasets with narrow content that the model already knows well, where we don't want the model to forget that knowledge by focusing on o.
-- Fine-tuned LoRA with rank = 16 and alpha = 32, epoch = 1
+- Fine-tuned LoRA with rank = 16 and alpha = 32, epoch = 1, linear (optim)
+- DoRA
 
 # Improvement
 - Increasing the rank can help the model do better at producing robust structure.
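
The notes reference a specific adapter configuration: rank 16, alpha 32, one epoch, a linear schedule, and DoRA. Below is a minimal sketch of how such a setup might look with the Hugging Face `peft` and `transformers` libraries; the base model name and `target_modules` are placeholders, not values taken from this repository.

```python
# Sketch of the LoRA/DoRA setup described in the notes
# (rank = 16, alpha = 32, 1 epoch, linear LR schedule).
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = "base-model-name"  # placeholder, not the repository's actual model
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA adapter with the hyperparameters from the notes;
# use_dora=True enables the DoRA variant mentioned above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # placeholder module names
    use_dora=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Training arguments: one epoch with a linear learning-rate schedule.
training_args = TrainingArguments(
    output_dir="lora-out",
    num_train_epochs=1,
    lr_scheduler_type="linear",
)
```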