Llama-3-Open-Ko-Linear-8B-GGUF
Quantized with llama.cpp.
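For reference, a GGUF file such as the Q5_K_M one used below is typically produced with llama.cpp's conversion and quantization tools. The commands are only a sketch; script and binary names, as well as the local paths, vary by llama.cpp version and setup.

# convert the merged HF checkpoint to an fp16 GGUF (local path is a placeholder)
python convert_hf_to_gguf.py ./Llama-3-Open-Ko-Linear-8B --outtype f16 --outfile llama-3-open-ko-linear-8b-f16.gguf
# quantize the fp16 GGUF down to Q5_K_M
./llama-quantize llama-3-open-ko-linear-8b-f16.gguf llama-3-open-ko-linear-8b-Q5_K_M.gguf Q5_K_M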
Merge Details
"I thought about it yesterdayβmerging the solid foundation of beomi/Llama-3-Open-Ko-8B with the specialized precision of beomi/Llama-3-Open-Ko-8B-Instruct-preview, using task arithmetic, is like composing a korean song that seamlessly blends timeless rhythms with contemporary solos, creating a harmonious masterpiece tailored to today's needs."
🇰🇷 Merge Method
This model was merged with the task arithmetic merge method, using beomi/Llama-3-Open-Ko-8B as the base.
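As a rough sketch of how task arithmetic combines the models (this formula is an illustration, not part of the original card), each merged model contributes its delta from the base, scaled by its weight:

$$\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \,(\theta_i - \theta_{\text{base}})$$

With the configuration below, beomi/Llama-3-Open-Ko-8B is itself the base, so its task vector is zero; effectively, the 0.8-weighted delta of the Instruct-preview model is what shifts the merged weights.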
🇰🇷 Models Merged
The following models were included in the merge:
- beomi/Llama-3-Open-Ko-8B-Instruct-preview
Ollama
ollama create Llama-3-Open-Ko-Linear-8B -f ./Modelfile_Q5_K_M
Adjust the Modelfile below to suit your taste.
[Modelfile_Q5_K_M]
FROM llama-3-open-ko-linear-8b-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""
SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop "<s>"
PARAMETER stop "</s>"
PARAMETER top_k 50
PARAMETER top_p 0.95
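Once the model is created, you can try it straight from the command line; the sample prompt is only an illustration (it asks for a brief introduction to Seoul).

ollama run Llama-3-Open-Ko-Linear-8B "서울에 대해 간단히 소개해줘."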
Configuration
The following YAML configuration was used to produce this model:
models:
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B
    parameters:
      weight: 0.2
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
    parameters:
      weight: 0.8
merge_method: task_arithmetic
base_model: beomi/Llama-3-Open-Ko-8B
dtype: bfloat16
random_seed: 0
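To reproduce the merge itself, a configuration like the one above is normally saved to a file and passed to mergekit; the file name and output path here are placeholders.

pip install mergekit
mergekit-yaml merge-config.yaml ./Llama-3-Open-Ko-Linear-8B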