---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- 4bit
- AWQ
- AutoAWQ
- 7b
- quantized
- Mistral
- Mistral-7B
---
# Model Card for alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ

This repo contains a 4-bit quantized (using AutoAWQ) version of Mistral AI's Mistral-7B-Instruct-v0.2.

AWQ (Activation-aware Weight Quantization for LLM Compression and Acceleration) was developed by MIT-HAN-Lab.
## Model Details

- Model creator: Mistral AI
- Original model: Mistral-7B-Instruct-v0.2
### About 4-bit quantization using AutoAWQ

- AutoAWQ GitHub repo: https://github.com/casper-hansen/AutoAWQ
- MIT-HAN-Lab llm-awq GitHub repo: https://github.com/mit-han-lab/llm-awq
```bibtex
@inproceedings{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Chen, Wei-Ming and Wang, Wei-Chen and Xiao, Guangxuan and Dang, Xingyu and Gan, Chuang and Han, Song},
  booktitle={MLSys},
  year={2024}
}
```
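For reference, the sketch below shows how a model is typically quantized to 4-bit with AutoAWQ. The `quant_config` values and output path are illustrative assumptions, not necessarily the exact settings used to produce this repo.

```python
# Illustrative AutoAWQ quantization sketch; settings are assumptions, not
# necessarily those used for alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "mistralai/Mistral-7B-Instruct-v0.2"
quant_path = "Mistral-7B-Instruct-v0.2-4bit-AWQ"  # assumed local output directory

# Common 4-bit AWQ settings: zero-point quantization, group size 128, GEMM kernels
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Run activation-aware quantization, then save the quantized model and tokenizer
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```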
## How to Get Started with the Model

Use the code below to get started with the model.

### How to run from Python code

First install the packages:

```shell
pip install autoawq
pip install accelerate
```
Import the required modules:

```python
import torch
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
```
Load the quantized model and generate text:

```python
# Define the model ID
model_id = "alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ"

# Load the tokenizer and the quantized model
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True, trust_remote_code=False, safetensors=True)

# Set up the prompt and prompt template. Change the instruction as needed.
prompt = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
formatted_prompt = f"<s> [INST] You are a helpful and fun-loving assistant. Always answer as jestfully as possible. [/INST] </s> [INST] {prompt} [/INST]"

tokens = tokenizer(formatted_prompt, return_tensors="pt").input_ids.cuda()

# Generate output; adjust the sampling parameters as needed
generation_output = model.generate(tokens, do_sample=True, temperature=1.7, top_p=0.95, top_k=40, max_new_tokens=512)

# Print the output
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```
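Alternatively, recent transformers releases can load AWQ checkpoints directly when autoawq is installed, so the high-level text-generation pipeline can also be used. The sketch below is a minimal example under that assumption; the sampling parameters are illustrative.

```python
# Minimal sketch (assumption): transformers loads the AWQ weights via the
# quantization_config stored in this repo, provided autoawq is installed.
import torch
from transformers import pipeline

model_id = "alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",
)

prompt = "<s> [INST] Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar. [/INST]"

# Illustrative sampling parameters; adjust as needed
output = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(output[0]["generated_text"])
```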
## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]