Update README.md
- lora
- peft
---

GPT-J 6B was finetuned on GPT-4 generations of the Alpaca prompts with [MonsterAPI](https://monsterapi.ai)'s no-code LLM finetuner, using LoRA for ~65,000 steps. The run was auto-optimised to fit on a single A6000 GPU with no out-of-memory issues, and required no code or GPU-server setup on my part; the finetuner handles all of that by itself.
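For readers unfamiliar with LoRA, here is a minimal sketch of the idea behind the adapter this finetune produced. The dimensions and scaling values below are illustrative placeholders, not GPT-J's real configuration: a frozen pretrained weight `W` is adapted by a trainable low-rank update `(alpha / r) * B @ A`, so only the small matrices `A` and `B` need to be trained and stored.

```python
import numpy as np

# Illustrative LoRA update (hypothetical shapes, not GPT-J's real dimensions).
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16           # hidden size, LoRA rank, scaling factor

W = rng.standard_normal((d, d))  # frozen pretrained weight
A = rng.standard_normal((r, d))  # trainable down-projection
B = np.zeros((d, r))             # trainable up-projection, zero-initialized

delta = (alpha / r) * B @ A      # low-rank weight update
W_adapted = W + delta

# With B zero-initialized, training starts from the unmodified base model.
assert np.allclose(W_adapted, W)
```

Because `B` starts at zero, the adapted model is initially identical to the base model, and training only has to learn the low-rank correction.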

Documentation on the no-code LLM finetuner:
https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm

![training loss](trainloss.png "Training loss")