---
license: llama3.2
language:
- ko
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
---
![Pexels image](https://images.pexels.com/photos/14541507/pexels-photo-14541507.jpeg)
## Merged from the models below, called "gx thinking Groove Feeling X-mas"
- [Meta Llama](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- [Bllossom Korean Llama](https://huggingface.co/Bllossom/llama-3.2-Korean-Bllossom-3B)
- [Carrot AI Rabbit Llama](https://huggingface.co/CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct)

There is no such thing as a flawless system; the point is to use it appropriately and within reason, without pushing it to its limits.
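The card does not state the actual merge recipe. As an illustration only, merges of this kind are commonly expressed as a mergekit YAML config; the merge method, weights, and densities below are assumptions, not the configuration actually used:

```yaml
# Hypothetical mergekit recipe -- method, weights, and densities are
# assumptions, not the actual configuration used for this model.
models:
  - model: meta-llama/Llama-3.2-3B-Instruct
  - model: Bllossom/llama-3.2-Korean-Bllossom-3B
    parameters:
      density: 0.5
      weight: 0.5
  - model: CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: meta-llama/Llama-3.2-3B-Instruct
dtype: bfloat16
```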
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = 'asiansoul/llama-3.2-koen-merged-3b-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# "Cheolsu had 20 pencils. Younghee took half, and Minsu took 5 of the
# remaining. How many pencils does Cheolsu have left?"
instruction = "철수가 20개의 연필을 가지고 있었는데 영희가 절반을 가져가고 민수가 남은 5개를 가져갔으면 철수에게 남은 연필의 갯수는 몇개인가요?"

messages = [
    {"role": "user", "content": instruction}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop generation at either end-of-text or end-of-turn token
terminators = [
    tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Example output:

```
철수가 20개의 연필을 가지고 있었고, 영희가 절반(20/2 = 10)을 가져갔습니다. 따라서 철수가 남은 연필의 갯수는 20 - 10 = 10입니다.
민수가 남은 5개를 가져갔으니, 철수가 남은 연필의 갯수는 10 - 5 = 5입니다.
따라서 철수가 남은 연필의 갯수는 5개입니다.
```

(Translation: Cheolsu had 20 pencils, and Younghee took half (20/2 = 10), so 20 - 10 = 10 remained. Minsu then took 5 of the remaining, so 10 - 5 = 5 remained. Therefore, Cheolsu has 5 pencils left.)
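The arithmetic in the sample answer above can be checked directly; a trivial sanity check, using the same quantities as the prompt:

```python
# 20 pencils; Younghee takes half, then Minsu takes 5 of the remainder.
total = 20
after_younghee = total - total // 2  # half taken -> 10 left
after_minsu = after_younghee - 5     # 5 more taken -> 5 left
print(after_minsu)  # → 5
```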
```bibtex
@article{Llama3.2KoEnMerged3BInstruct,
  title={asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF Card},
  author={Asiansoul},
  merged={Asiansoul},
  year={2024},
  url={https://huggingface.co/asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF}
}
```