---
license: cc-by-nc-sa-4.0
datasets:
  - camel-ai/code
  - ehartford/wizard_vicuna_70k_unfiltered
  - anon8231489123/ShareGPT_Vicuna_unfiltered
  - teknium1/GPTeacher/roleplay-instruct-v2-final
  - teknium1/GPTeacher/codegen-isntruct
  - timdettmers/openassistant-guanaco
  - camel-ai/math
  - project-baize/baize-chatbot/medical_chat_data
  - project-baize/baize-chatbot/quora_chat_data
  - project-baize/baize-chatbot/stackoverflow_chat_data
  - camel-ai/biology
  - camel-ai/chemistry
  - camel-ai/ai_society
  - jondurbin/airoboros-gpt4-1.2
  - LongConversations
  - camel-ai/physics
tags:
  - Composer
  - MosaicML
  - llm-foundry
inference: false
---

# MPT-7B-Chat-8k (GPTQ)

License: CC-By-NC-SA-4.0 (non-commercial use only)

## Model Date

July 18, 2023

## Model License

CC-By-NC-SA-4.0 (non-commercial use only)

## Documentation

## How to Use

You need the `auto-gptq` package installed to run the example below:
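```sh
pip install auto-gptq
```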

Example script:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer, TextGenerationPipeline, TextStreamer

quantized_model = "casperhansen/mpt-7b-8k-chat-gptq"

print('loading model...')

# load quantized model to the first GPU
tokenizer = AutoTokenizer.from_pretrained(quantized_model, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(quantized_model, device="cuda:0", trust_remote_code=True)

prompt_format = """<|im_start|>system
A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.<|im_end|>
<|im_start|>user
{text}<|im_end|>
<|im_start|>assistant

"""

prompt = prompt_format.format(text="What is the difference between nuclear fusion and fission?")

print('generating...')

# stream tokens to stdout as they are generated (see the pipeline alternative below)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
tokens = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**tokens, max_length=512, streamer=streamer)
```
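
As an alternative to calling `model.generate` directly, the quantized model can also be wrapped in a `transformers` text-generation pipeline (this is what the `TextGenerationPipeline` import above is for). A minimal sketch, reusing the `model`, `tokenizer`, and `prompt` objects from the script above; the `max_length` value is illustrative:

```python
# Alternative: wrap the quantized model and tokenizer in a transformers pipeline
# instead of calling model.generate() directly.
pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)

# The pipeline returns a list of dicts; "generated_text" holds the prompt plus the completion.
result = pipe(prompt, max_length=512)
print(result[0]["generated_text"])
```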