AVA-Qwen1.5-7B
Fine-Tuned Qwen1.5 7B Persian Large Language Model (LLM) / Persian Qwen1.5 7B
AVA-Qwen1.5 / Persian Qwen
This repository contains documents for the fine-tuned Qwen1.5 Persian Large Language Model (LLM) called AVA-Qwen1.5.
(Still in progress)
Dataset used:
To Be Done
Usage:
All models are hosted on Hugging Face; here is the code for inference:
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch

model_name_or_id = "MehdiHosseiniMoghadam/AVA-Qwen1.5-7B-Chat"

# Load the model in 8-bit (requires bitsandbytes and accelerate) and spread it across available GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_id,
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
    load_in_8bit=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_id)

prompt = ''  # put your (Persian) question here
prompt = f"### Human:{prompt}\n### Assistant:"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

generation_config = GenerationConfig(
    do_sample=True,
    top_k=1,
    temperature=0.01,
    max_new_tokens=90,
    pad_token_id=tokenizer.eos_token_id,
)

outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
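The decode call above echoes the full prompt before the answer. To print only the model's reply, you can decode just the newly generated tokens; this is a minimal sketch (not part of the original card), assuming the "### Human / ### Assistant" prompt format shown above:

# Slice off the prompt tokens and decode only what the model generated
prompt_length = inputs["input_ids"].shape[-1]
reply = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(reply)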
License
Released Jan 30, 2024 by Mehdi Hosseini Moghadam
Attention ⚠️: The user is responsible for their use of AVA-Qwen1.5 / Persian Qwen1.5.
Any misuse of the model (of any kind) is the responsibility of the user, not the creator.
Contact