fine-tuning
#1 by Sigmally - opened
Could you explain how you performed the fine-tuning, which tools or code were used, how long the entire process took, and on how many GPUs it was conducted?
I used my own code, which can be found on my GitHub (https://github.com/Locutusque/TinyMistral-train-eval). The model was fully fine-tuned without LoRA on a single GPU, and the run took around 2 days to finish. If you're curious, I used a Titan V to train it.
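For anyone who just wants the general shape of a full fine-tune (no LoRA, all weights updated) on a single GPU, here is a minimal sketch using the Hugging Face Transformers `Trainer`. This is not the exact script from the repo above; the checkpoint name, dataset, and hyperparameters below are placeholders you would swap for your own.

```python
# Minimal single-GPU full fine-tune sketch (no LoRA) with Hugging Face Transformers.
# Checkpoint, dataset, and hyperparameters are illustrative placeholders only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "Locutusque/TinyMistral-248M"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Causal-LM training needs a pad token for batching.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Any plain-text dataset works; wikitext is only an example.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="tinymistral-finetuned",
    per_device_train_batch_size=4,   # sized for a single ~12 GB GPU
    gradient_accumulation_steps=8,   # simulate a larger effective batch
    num_train_epochs=1,
    learning_rate=2e-5,
    fp16=True,                       # mixed precision to fit in memory
    logging_steps=50,
    save_steps=1000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
```

Because a full fine-tune updates every parameter, the main single-GPU constraints are memory and time; mixed precision plus gradient accumulation is the usual way to keep a run like this within one card.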