---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- alpaca
base_model: unsloth/Qwen2-7b-bnb-4bit
datasets:
- yahma/alpaca-cleaned
---

# Uploaded model

This model was fine-tuned on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) version of the Alpaca dataset.

## Training configuration

- `max_seq_length`: 8192
- `dtype`: None
- `load_in_4bit`: False
- `warmup_steps`: 10
- `max_steps`: 70
- `learning_rate`: 2e-5
- `fp16`: `not is_bfloat16_supported()`
- `bf16`: `is_bfloat16_supported()`
- `logging_steps`: 1
- `optim`: "adamw_8bit"
- `weight_decay`: 0.01
- `lr_scheduler_type`: "linear"
- `seed`: 3407
- `output_dir`: "outputs"

## Model details

- **Developed by:** Kaan35
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7b-bnb-4bit (Qwen2)

This model was trained with [Unsloth](https://github.com/unslothai/unsloth).
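
Below is a minimal sketch of how the hyperparameters listed above might be wired into an Unsloth + TRL fine-tuning script. It assumes the standard Unsloth Alpaca workflow: the LoRA adapter settings, batch size, gradient accumulation steps, and prompt template are not stated in this card and are illustrative assumptions; only the values listed in the training configuration come from this model.

```python
# Sketch of the assumed training setup (Unsloth + TRL SFTTrainer).
# Values not listed in the card above are marked as assumptions.
from unsloth import FastLanguageModel, is_bfloat16_supported
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 8192
dtype = None          # None lets Unsloth auto-select the compute dtype
load_in_4bit = False  # as listed in the card

# Load the Qwen2-7B base checkpoint released by Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-7b-bnb-4bit",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)

# Assumed LoRA adapter settings (not specified in the card).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)

# Assumed Alpaca-style prompt template for yahma/alpaca-cleaned.
alpaca_prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

def formatting_prompts_func(examples):
    # Join instruction/input/output into a single training string per example.
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
        for ins, inp, out in zip(
            examples["instruction"], examples["input"], examples["output"]
        )
    ]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(formatting_prompts_func, batched=True)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # assumed, not stated in the card
        gradient_accumulation_steps=4,   # assumed, not stated in the card
        warmup_steps=10,
        max_steps=70,
        learning_rate=2e-5,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)

trainer.train()
```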