Update README.md
README.md CHANGED

@@ -24,6 +24,15 @@ This should (hopefully) make it quite capable with Golang coding tasks.
 
 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/630fff3f02ce39336c495fe9/5R1WZ9hvqX4XTKws-FaJ3.jpeg)
 
+## LoRA
+
+- [FP16](https://huggingface.co/smcleod/llama-3-1-8b-smcleod-golang-coder-v2/tree/main/llama-3-1-8b-smcleod-golang-coder-v2-lora-fp16)
+- [BF16](https://huggingface.co/smcleod/llama-3-1-8b-smcleod-golang-coder-v2/tree/main/llama-3-1-8b-smcleod-golang-coder-v2-lora-bf16)
+
+## GGUF
+
+Coming soon...
+
 ## Training
 
 I trained this model (based on Llama 3.1 8b) on a merged dataset I created consisting of 50,627 rows, 13.3M input tokens and 2.2M output tokens.
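For a rough sense of scale, the dataset figures in the Training section imply fairly short examples on average. A quick back-of-the-envelope check (assuming the rounded totals stated above, 13.3M input and 2.2M output tokens):

```python
# Average tokens per row implied by the dataset statistics above
# (50,627 rows; totals are the rounded figures from the README).
rows = 50_627
input_tokens = 13_300_000   # 13.3M (rounded)
output_tokens = 2_200_000   # 2.2M (rounded)

avg_in = input_tokens / rows    # ~263 input tokens per row
avg_out = output_tokens / rows  # ~43 output tokens per row
print(f"~{avg_in:.0f} input tokens/row, ~{avg_out:.0f} output tokens/row")
```

So each training example averages roughly 260 input tokens against about 43 output tokens, i.e. relatively compact prompt/completion pairs.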