Update README.md
README.md
CHANGED
@@ -55,7 +55,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
 # Notes:
-- For small datasets with narrow content which the model already
+- For small datasets with narrow content where the model already performs well on our domain and we do not want it to forget that knowledge => it is enough to fine-tune only the q and o projections.
 - Fine-tuned LoRA with rank = 16 and alpha = 32, epoch = 1, linear LR schedule
 - DoRA
 
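For reference, a minimal sketch of a fine-tuning setup matching these notes, assuming the Hugging Face `peft` and `transformers` libraries, assuming "q, o" refers to the attention `q_proj`/`o_proj` modules, and using a placeholder base-model name:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Placeholder model name; substitute the actual base model used in the README.
model = AutoModelForCausalLM.from_pretrained("your-base-model")

# DoRA config matching the notes: rank 16, alpha 32,
# adapters only on the q and o attention projections.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "o_proj"],
    use_dora=True,  # DoRA: weight-decomposed LoRA
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# Training setup from the notes: a single epoch with a linear LR schedule.
training_args = TrainingArguments(
    output_dir="lora-out",
    num_train_epochs=1,
    lr_scheduler_type="linear",
)
```

Restricting `target_modules` to the q and o projections leaves the remaining base weights without adapters, which is the note's rationale for reducing forgetting on a small, narrow dataset.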