# Model Card for maliijaz/Finetuned_Qwen2

## Model Details

### Model Description
- Developed by: Muhammad Ali Ijaz
- Model type: [More Information Needed]
- Language(s) (NLP): Turkish & English
- License: Apache 2.0
- Finetuned from model: Qwen/Qwen2-7B-Instruct-GPTQ-Int8
## Uses

To use the model, install the dependencies below, then run the code that follows.

### Install dependencies

- transformers
- peft

Both packages are available on PyPI (`pip install transformers peft`).
### Imports

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
```
### Usage

Load the quantized base model first, then attach the fine-tuned PEFT adapter and load the tokenizer:

```python
# Load the quantized base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B-Instruct-GPTQ-Int8",
    device_map="auto",
    trust_remote_code=False,
    revision="main",
)

# Load the fine-tuned PEFT adapter on top of the base model
model = PeftModel.from_pretrained(model=base_model, model_id="maliijaz/Finetuned_Qwen2")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("maliijaz/Finetuned_Qwen2", use_fast=True)
```
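Continuing from the code above, here is a minimal generation sketch. It assumes the tokenizer ships the standard Qwen2 chat template; the Turkish prompt and the sampling settings are illustrative, not tuned.

```python
# Illustrative prompt; the model targets Turkish & English
messages = [{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}]

# Build input ids with the chat template (assumes the template is present in the tokenizer)
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response (sampling settings are illustrative)
output_ids = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```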
## Framework versions
- PEFT 0.11.1
## Model tree for maliijaz/Finetuned_Qwen2

- Base model: Qwen/Qwen2-7B
- Finetuned: Qwen/Qwen2-7B-Instruct
- Quantized: Qwen/Qwen2-7B-Instruct-GPTQ-Int8