Chanjun committed
Commit c301698
1 Parent(s): 9599e13

Update README.md

Files changed (1)
  1. README.md +27 -0
README.md CHANGED
@@ -51,6 +51,33 @@ pipeline_tag: text-generation
 
  {Assistant}
  ```

+ ## Usage
+
+ - Tested on A100 80GB
+ - Our model can handle up to 10k input tokens, thanks to the `rope_scaling` option (sketched after this diff)
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
+
+ tokenizer = AutoTokenizer.from_pretrained("upstage/llama-30b-instruct-2048")
+ model = AutoModelForCausalLM.from_pretrained(
+     "upstage/llama-30b-instruct-2048",
+     device_map="auto",
+     torch_dtype=torch.float16,
+     load_in_8bit=True,  # requires the bitsandbytes package
+     rope_scaling={"type": "dynamic", "factor": 2.0},  # factor must be a float > 1; allows handling of longer inputs
+ )
+
+ prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ del inputs["token_type_ids"]  # the tokenizer emits token_type_ids, which generate() does not accept
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+
+ # max_new_tokens must be an integer per the generate() API; generation
+ # still stops early at the EOS token. 4096 here is an illustrative cap.
+ output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=4096)
+ output_text = tokenizer.decode(output[0], skip_special_tokens=True)
+ ```
+
  ## Hardware and Software

  * **Hardware**: We utilized an A100x8 for training our model
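
For context on the `rope_scaling={"type": "dynamic", "factor": 2.0}` option added above: dynamic NTK scaling recomputes the rotary embedding base whenever a prompt exceeds the model's 2048-token training context, which is what lets the model accept inputs around 10k tokens without retraining. Below is a minimal sketch of that base recomputation, mirroring the formula used by transformers' dynamic NTK rotary embedding; the function name and the defaults (base 10000, head dimension 128) are illustrative assumptions, not part of this model card.

```python
def dynamic_ntk_base(seq_len: int, base: float = 10000.0, dim: int = 128,
                     max_pos: int = 2048, factor: float = 2.0) -> float:
    """Recompute the RoPE base for prompts longer than the training context.

    Illustrative sketch of the scaling used by transformers' dynamic NTK
    rotary embedding; the name and default values are assumptions.
    """
    if seq_len <= max_pos:
        return base  # within the training context: plain RoPE, no scaling
    # Stretch the base so the rotary frequencies slow down just enough to
    # cover the longer sequence.
    scale = (factor * seq_len / max_pos) - (factor - 1)
    return base * scale ** (dim / (dim - 2))

# A 10k-token prompt against the 2048-token training context yields a base
# roughly 9x larger, i.e. slower-rotating position frequencies:
print(dynamic_ntk_base(10_000))
```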