---
license: llama3.2
language:
- ko
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
---

![Pexels image](https://images.pexels.com/photos/14541507/pexels-photo-14541507.jpeg)


## Merge of the models below, called "gx thinking Groove Feeling X-mas"
- [Meta Llama](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- [Bllossom Korean Llama](https://huggingface.co/Bllossom/llama-3.2-Korean-Bllossom-3B)
- [Carrot AI Rabbit Llama](https://huggingface.co/CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct)

There is no such thing as a flawless system; what matters is using one appropriately and reasonably, without pushing it past its limits.
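The exact merge recipe for this model is not published, so the following is only an illustrative sketch of one common approach: a linear (weighted-average) merge of matching parameter tensors, the kind of combination often applied to instruction models like those listed above.

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching tensors across several model state dicts.

    Illustrative only: the actual method and weights used for this
    merge are not published.
    """
    assert abs(sum(weights) - 1.0) < 1e-6, "weights should sum to 1"
    merged = {}
    for name in state_dicts[0]:
        # Accumulate each parameter as a weighted sum in float32.
        merged[name] = sum(
            w * sd[name].to(torch.float32)
            for sd, w in zip(state_dicts, weights)
        )
    return merged
```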

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'asiansoul/llama-3.2-koen-merged-3b-instruct'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# "Cheolsu had 20 pencils; Younghee took half, and Minsu took the remaining 5.
#  How many pencils does Cheolsu have left?"
instruction = "μ² μˆ˜κ°€ 20개의 연필을 가지고 μžˆμ—ˆλŠ”λ° μ˜ν¬κ°€ μ ˆλ°˜μ„ κ°€μ Έκ°€κ³  λ―Όμˆ˜κ°€ 남은 5개λ₯Ό κ°€μ Έκ°”μœΌλ©΄ μ² μˆ˜μ—κ²Œ 남은 μ—°ν•„μ˜ κ°―μˆ˜λŠ” λͺ‡κ°œμΈκ°€μš”?"

messages = [
    {"role": "user", "content": instruction}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Example output (translated from Korean):

```
Cheolsu had 20 pencils, and Younghee took half (20/2 = 10). So Cheolsu had 20 - 10 = 10 pencils left.

Minsu then took the remaining 5, so Cheolsu had 10 - 5 = 5 pencils left.

Therefore, Cheolsu has 5 pencils left.
```
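The arithmetic in the sample answer can be verified directly (20 pencils, half taken, then 5 more of the remainder):

```python
pencils = 20
pencils -= pencils // 2   # Younghee takes half (10)
pencils -= 5              # Minsu takes 5 of the remainder
print(pencils)            # -> 5
```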

```bibtex
@misc{Llama3.2KoEnMerged3BInstruct,
  title={asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF Card},
  author={Asiansoul},
  year={2024},
  url={https://huggingface.co/asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF}
}
```