study-hjt committed
Commit
f02496e
1 Parent(s): 7710fe0

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -59,15 +59,15 @@ KeyError: 'qwen2'
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from modelscope import AutoModelForCausalLM, AutoTokenizer
+from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "huangjintao/Qwen1.5-32B-Chat-GPTQ-Int8",
+    "study-hjt/Qwen1.5-32B-Chat-GPTQ-Int8",
     torch_dtype="auto",
     device_map="auto"
 )
-tokenizer = AutoTokenizer.from_pretrained("huangjintao/Qwen1.5-32B-Chat-GPTQ-Int8")
+tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-32B-Chat-GPTQ-Int8")
 
 prompt = "Give me a short introduction to large language model."
 messages = [
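
The hunk above is truncated at `messages = [`. For context, here is a minimal sketch of how the complete snippet reads after this commit, following the standard Qwen1.5 `apply_chat_template` usage; the chat messages and generation settings below the truncation point are assumptions, not part of the diff itself.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "study-hjt/Qwen1.5-32B-Chat-GPTQ-Int8",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-32B-Chat-GPTQ-Int8")

prompt = "Give me a short introduction to large language model."
# Assumed continuation: a typical system + user chat, as in the stock Qwen1.5 README.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's prompt format and append the
# assistant generation prefix.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Generate a response and strip the prompt tokens from the output.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```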