---
license: llama3.2
language:
  - ko
  - en
base_model:
  - meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
---

(Cover image: Pexels)

This merged model is nicknamed "gx thinking Groove Feeling X-mas".

No system is flawless; the point is to use it appropriately and reasonably, without pushing it to its limits.

Example usage with the Hugging Face `transformers` library:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'asiansoul/llama-3.2-koen-merged-3b-instruct'

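# bfloat16 halves memory use; device_map="auto" places layers on available devices.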
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
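
# The prompt below is a Korean math word problem:
# "Cheolsu had 20 pencils. Younghee took half, and Minsu took 5 of the
#  remainder. How many pencils does Cheolsu have left?"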
instruction = "μ² μˆ˜κ°€ 20개의 연필을 가지고 μžˆμ—ˆλŠ”λ° μ˜ν¬κ°€ μ ˆλ°˜μ„ κ°€μ Έκ°€κ³  λ―Όμˆ˜κ°€ 남은 5개λ₯Ό κ°€μ Έκ°”μœΌλ©΄ μ² μˆ˜μ—κ²Œ 남은 μ—°ν•„μ˜ κ°―μˆ˜λŠ” λͺ‡κ°œμΈκ°€μš”?"

messages = [
    {"role": "user", "content": instruction},
]

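# Render the conversation with Llama 3's chat template and move it to the model's device.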
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

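# Stop generation at either of Llama 3's end-of-sequence tokens.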
terminators = [
    tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

Example output (translated from Korean):

```
Cheolsu had 20 pencils, and Younghee took half (20/2 = 10). So Cheolsu had 20 - 10 = 10 pencils left.

Since Minsu then took the remaining 5, Cheolsu had 10 - 5 = 5 pencils left.

Therefore, Cheolsu has 5 pencils remaining.
```
Citation:

```bibtex
@misc{Llama3.2KoEnMerged3BInstruct,
  title  = {asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF Model Card},
  author = {Asiansoul},
  year   = {2024},
  url    = {https://huggingface.co/asiansoul/llama-3.2-koen-merged-3b-instruct-GGUF}
}
```