reciprocate committed on
Commit
67dc96c
1 Parent(s): 4bdc987

Rename README.md to update(code): fix tokenizer loading

README.md → update(code): fix tokenizer loading RENAMED
@@ -31,11 +31,9 @@ license: other
 `StableLM 2 Zephyr 1.6B` uses the following instruction format:
 ```
 <|user|>
-List 3 synonyms for the word "tiny"<|endoftext|>
+Which famous math number begins with 1.6 ...?<|endoftext|>
 <|assistant|>
-1. Dwarf
-2. Little
-3. Petite<|endoftext|>
+The number you are referring to is 1.618033988749895. This is the famous value known as the golden ratio<|endoftext|>
 ```
 
 This format is also available through the tokenizer's `apply_chat_template` method:
@@ -43,14 +41,14 @@ This format is also available through the tokenizer's `apply_chat_template` meth
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-zephyr-3b')
+tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-zephyr-1_6b', trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained(
   'stabilityai/stablelm-2-zephyr-1_6b',
   trust_remote_code=True,
   device_map="auto"
 )
 
-prompt = [{'role': 'user', 'content': 'List 3 synonyms for the word "tiny"'}]
+prompt = [{'role': 'user', 'content': 'Which famous math number begins with 1.6 ...?'}]
 inputs = tokenizer.apply_chat_template(
   prompt,
   add_generation_prompt=True,
@@ -60,7 +58,7 @@ inputs = tokenizer.apply_chat_template(
 tokens = model.generate(
   inputs.to(model.device),
   max_new_tokens=1024,
-  temperature=0.8,
+  temperature=0.5,
   do_sample=True
 )
 
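
For reference, a minimal sketch of the updated snippet assembled end-to-end. Lines that fall outside the hunks shown above (the remaining `apply_chat_template` arguments and the final decode/print step) are assumptions for illustration, not part of the committed file:

```python
# Sketch assembled from the updated README snippet.
# The return_tensors argument and the final decode line fall outside the diff
# hunks above, so they are assumptions rather than the file's exact contents.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-zephyr-1_6b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-zephyr-1_6b',
    trust_remote_code=True,
    device_map="auto"
)

# Chat messages are rendered into the <|user|> ... <|assistant|> format by the template.
prompt = [{'role': 'user', 'content': 'Which famous math number begins with 1.6 ...?'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'  # assumption: not visible in the diff context
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)

# Assumed decode step: turn the generated token ids back into text.
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```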