Update README.md
#15 by acryl-min · opened

README.md CHANGED
@@ -24,6 +24,8 @@ A-LLM is a Korean large language model built on Meta's Llama-3-8B architecture
 The model was trained using the DoRA (Weight-Decomposed Low-Rank Adaptation) methodology on a comprehensive Korean dataset
 , achieving state-of-the-art performance among open-source Korean language models.
 
+![leaderboard_screenshot](./horangi.png)
+
 
 ## Performance Benchmarks
 ### Horangi Korean LLM Leaderboard
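The README context above says the model was trained with DoRA (Weight-Decomposed Low-Rank Adaptation). As a minimal sketch of what enabling DoRA looks like in practice, Hugging Face PEFT exposes it via the `use_dora` flag on `LoraConfig`; the model name, rank, and target modules below are illustrative assumptions, not details taken from this PR.

```python
# Hypothetical DoRA fine-tuning config via Hugging Face PEFT.
# All hyperparameters and the base-model name are illustrative only.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

dora_config = LoraConfig(
    r=16,                                  # low-rank dimension (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    use_dora=True,                         # enables weight decomposition on top of LoRA
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the DoRA adapter weights are trainable.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = get_peft_model(base, dora_config)
model.print_trainable_parameters()
```

The wrapped `model` can then be passed to an ordinary training loop or `transformers.Trainer`; only the adapter parameters are updated.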