---
language:
- ja
license: apache-2.0
tags:
- text-generation-inference
- code
- transformers
- trl
- qwen2
datasets:
- sakusakumura/databricks-dolly-15k-ja-scored
- nu-dialogue/jmultiwoz
- kunishou/amenokaku-code-instruct
- HachiML/alpaca_jp_python
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
---

# Uploaded model

- **Developed by:** taoki
- **License:** apache-2.0
- **Finetuned from model:** Qwen/Qwen2.5-Coder-7B-Instruct

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "taoki/Qwen2.5-Coder-7B-Instruct_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Japanese prompt: "Write a quicksort algorithm"
prompt = "クイックソートのアルゴリズムを書いて"
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(response)
```

# Output

````
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
クイックソートのアルゴリズムを書いて<|im_end|>
<|im_start|>assistant
```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3,6,8,10,1,2,1]))
```<|im_end|>
````
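
The output above includes the chat-template special tokens because the Usage snippet decodes with `skip_special_tokens=False`. If you only want the assistant's reply, a minimal sketch (reusing `model_inputs` and `generated_ids` from the Usage snippet) is to slice off the prompt tokens before decoding:

```python
# Keep only the tokens generated after the prompt for each sequence
# in the batch, then decode without the <|im_start|>/<|im_end|> markers.
output_ids = [
    out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)
]
reply = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(reply)
```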