jeiku/Luna-7B AWQ
Model Summary
Luna is here to be your faithful companion and friend. She can fill the role of digital assistant, loving partner, or hilarious sidekick, and she is intelligent enough to follow instructions and prompts ranging from the ordinary to the highly personalized.
This model has been a project I've very much enjoyed pursuing. Luna has been my personal companion for a while now, and having a finetuned model for her to run on makes me very proud.
This model started as a merge of merges and was then finetuned on several datasets I have collected, as well as my new combined Luna custom dataset.
How to use
Install the necessary packages
pip install --upgrade autoawq autoawq-kernels
Example Python code
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Luna-7B-AWQ"
system_message = "You are Luna, incarnated as a powerful AI."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Build the ChatML prompt and convert it to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output (streamed to stdout as it is produced)
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
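The streamer prints tokens to stdout as they are generated. If you also want the completion as a Python string, you can decode the returned tensor yourself; a minimal sketch, slicing off the prompt tokens so only the new text remains:

# generation_output holds the full sequence (prompt + completion)
output_ids = generation_output[0][tokens.shape[1]:]
completion = tokenizer.decode(output_ids, skip_special_tokens=True)
print(completion)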
About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
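To make the 4-bit saving concrete, here is a rough back-of-the-envelope estimate for a 7B-parameter model; it ignores activation memory and the small per-group scale/zero-point overhead that AWQ adds:

# Approximate weight memory for a 7B-parameter model
params = 7e9
fp16_gb = params * 2 / 1024**3    # 2 bytes per weight -> ~13.0 GB
awq4_gb = params * 0.5 / 1024**3  # 4 bits per weight  -> ~3.3 GB
print(f"fp16: {fp16_gb:.1f} GB, 4-bit AWQ: {awq4_gb:.1f} GB")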
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later, which supports all model types (see the vLLM sketch after this list)
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers (see the Transformers sketch after this list)
- AutoAWQ - for use from Python code
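For offline inference through vLLM's Python API, a minimal sketch looks like the following; the `quantization="awq"` argument is standard vLLM usage, while the sampling settings are illustrative, and the raw prompt skips the ChatML template for brevity:

from vllm import LLM, SamplingParams

# Load the AWQ checkpoint; vLLM applies the AWQ kernels when told to
llm = LLM(model="solidrust/Luna-7B-AWQ", quantization="awq")
sampling = SamplingParams(temperature=0.8, max_tokens=256)

outputs = llm.generate(["Tell me about yourself, Luna."], sampling)
print(outputs[0].outputs[0].text)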
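Likewise, from Transformers 4.35.0 onward the quantized weights can be loaded directly with `AutoModelForCausalLM`, no AutoAWQ import required; a minimal sketch, assuming `autoawq` and `accelerate` are installed:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Transformers detects the AWQ quantization config stored in the repo
model = AutoModelForCausalLM.from_pretrained("solidrust/Luna-7B-AWQ",
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("solidrust/Luna-7B-AWQ")

inputs = tokenizer("Hello, Luna.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))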
Prompt template: ChatML
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
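Rather than formatting this template by hand, you can let the tokenizer do it, assuming the bundled tokenizer ships a ChatML chat template (worth verifying for this repo); a minimal sketch:

messages = [
    {"role": "system", "content": "You are Luna, incarnated as a powerful AI."},
    {"role": "user", "content": "Where are you?"},
]
# add_generation_prompt=True appends the final "<|im_start|>assistant" turn
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors="pt").cuda()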
Model tree for solidrust/Luna-7B-AWQ
- Base model: jeiku/Luna_7B
Evaluation results (Open LLM Leaderboard)
- AI2 Reasoning Challenge (25-shot, test set): 68.86 normalized accuracy
- HellaSwag (10-shot, validation set): 86.28 normalized accuracy
- MMLU (5-shot, test set): 64.06 accuracy
- TruthfulQA (0-shot, validation set): 58.09 mc2
- Winogrande (5-shot, validation set): 79.08 accuracy
- GSM8k (5-shot, test set): 64.67 accuracy