digo-prayudha committed
Commit: 636127d
Parent(s): 54d69eb

Update README.md
README.md CHANGED

@@ -8,6 +8,7 @@ tags:
 - trl
 - sft
 - generated_from_trainer
+- lora
 model-index:
 - name: Llama-3.2-1B-Indonesian
   results: []
@@ -21,7 +22,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Llama-3.2-1B-Indonesian
 
-This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) that has been optimized for Indonesian language understanding and generation
+This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) that has been optimized for Indonesian language understanding and generation.
+<br>
+<br>
+The fine-tuning process utilized Low-Rank Adaptation (LoRA) to efficiently adapt the model while minimizing computational and storage overhead. This approach enables effective fine-tuning for specific tasks or domains, particularly in the Indonesian language context.
+
 
 ## Training and evaluation data
 
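The updated card says the model was adapted from meta-llama/Llama-3.2-1B-Instruct with LoRA, and the `trl`, `sft`, and `lora` tags point to TRL's supervised fine-tuning workflow. A minimal sketch of such a run is below; the dataset id, LoRA rank/alpha, and all training hyperparameters are illustrative assumptions, not the settings actually used for this model.

```python
# Illustrative LoRA SFT sketch (assumed setup, not the author's exact configuration).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical Indonesian instruction dataset id; replace with a real dataset.
dataset = load_dataset("username/indonesian-instructions", split="train")

# LoRA adds small low-rank update matrices instead of training all base weights.
peft_config = LoraConfig(
    r=16,              # rank of the low-rank matrices (assumed value)
    lora_alpha=32,     # scaling factor (assumed value)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="Llama-3.2-1B-Indonesian",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-4,
)

# SFTTrainer loads the base model from the hub id and attaches the LoRA adapter.
trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",
    train_dataset=dataset,
    args=training_args,
    peft_config=peft_config,
)
trainer.train()
```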
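For completeness, a usage sketch follows. The repository id `digo-prayudha/Llama-3.2-1B-Indonesian` is inferred from the author and card name, and the card does not state whether the repo hosts merged weights or only a LoRA adapter; with `peft` installed, `transformers` can load either, but treat the id and loading path as assumptions.

```python
# Hedged usage sketch: Indonesian text generation with the fine-tuned model.
# The repository id below is an assumption, not confirmed by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "digo-prayudha/Llama-3.2-1B-Indonesian"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama 3.2 Instruct checkpoints ship a chat template, so build the prompt through it.
messages = [
    # "Explain what artificial intelligence is in one paragraph."
    {"role": "user", "content": "Jelaskan apa itu kecerdasan buatan dalam satu paragraf."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```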