A reasoning model created by fine-tuning llm-jp/llm-jp-3-3.7b-instruct on chain-of-thought (CoT) data.
Training used a synthetic dataset generated with Qwen2.5-32B-Instruct-AWQ.
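
The card does not include the data-generation code, but the following is a minimal sketch of how such a synthetic CoT dataset could be produced with Qwen2.5-32B-Instruct-AWQ via `transformers`. The prompt wording, seed questions, output file name, and record schema are illustrative assumptions, not the author's actual pipeline.

```python
# Hypothetical sketch of synthetic CoT data generation; NOT the author's actual pipeline.
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

teacher_id = "Qwen/Qwen2.5-32B-Instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(teacher_id)
model = AutoModelForCausalLM.from_pretrained(
    teacher_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Assumed instruction: ask the teacher to answer in the same <Thought>/<Output>
# format that the student model is later trained to produce.
SYSTEM = (
    "You are a capable, logical assistant. First write your reasoning inside "
    "<Thought></Thought> tags, then write the final answer inside <Output></Output> tags."
)

seed_questions = ["1から10までの整数を足すと?"]  # placeholder seeds

records = []
for question in seed_questions:
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": question},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.95)
    # Decode only the newly generated tokens, not the prompt.
    answer = tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    records.append({"instruction": question, "output": answer})

with open("cot_synthetic.jsonl", "w", encoding="utf-8") as f:
    for r in records:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")
```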
Example usage:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"

model = AutoModelForCausalLM.from_pretrained(
    'Kendamarron/llm-jp-3-3.7b-o1-v0.1',
    torch_dtype=torch.bfloat16,
    device_map=device,
)
tokenizer = AutoTokenizer.from_pretrained('Kendamarron/llm-jp-3-3.7b-o1-v0.1')

# The system prompt tells the model to write its thought process inside
# <Thought></Thought> tags and the final answer for the user inside
# <Output></Output> tags. The user message asks: "What is the sum of the
# integers from 1 to 10?"
messages = [
    {"role": "system", "content": "あなたは優秀で論理的なアシスタントです。まずは<Thought></Thought>タグの中であなたの思考の過程を記載し、<Output></Output>タグの中に最終的にユーザーに提供する出力を記載します。"},
    {"role": "user", "content": "1から10までの整数を足すと?"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.95,
    top_k=40,
    temperature=0.7,
    repetition_penalty=1.1,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    no_repeat_ngram_size=2
)
# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
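
Because the model wraps its reasoning in `<Thought></Thought>` and the final answer in `<Output></Output>`, the answer can be pulled out of the decoded response. The helper below is a hypothetical convenience, not part of the model card:

```python
import re

def extract_output(response: str) -> str:
    """Hypothetical helper: return the <Output>...</Output> body, or the raw text as a fallback."""
    match = re.search(r"<Output>(.*?)</Output>", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

print(extract_output(response))  # final answer without the reasoning trace
```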
The model was trained with the following configuration (the fields match a LLaMA-Factory SFT YAML):
```yaml
### model
model_name_or_path: llm-jp/llm-jp-3-3.7b-instruct

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: cot_normal, cot_math
template: alpaca_ja
cutoff_len: 8192
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llm_jp/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 8
gradient_accumulation_steps: 4
learning_rate: 1.0e-5
num_train_epochs: 2.0
lr_scheduler_type: cosine
optim: adamw_bnb_8bit
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

### logging
report_to: wandb
```
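
Assuming this is a LLaMA-Factory config (an inference from the field names, not stated on the card), training could be launched by saving the YAML above to a file and invoking the LLaMA-Factory CLI, for example from Python:

```python
# Hypothetical launch, equivalent to running `llamafactory-cli train llm_jp_cot_sft.yaml`
# in a shell. Assumes LLaMA-Factory is installed; the YAML file name is illustrative.
import subprocess

subprocess.run(["llamafactory-cli", "train", "llm_jp_cot_sft.yaml"], check=True)
```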