wassemgtk and nbroad (HF staff) committed
Commit b306faa (1 parent: 8abc496)

add text-generation-inference example (#1)


- add text-generation-inference example (c8da07f32d96d661eb096dc8919f1038d9e5dcd1)


Co-authored-by: Nicholas Broad <nbroad@users.noreply.huggingface.co>

Files changed (1): README.md +9 -0
README.md CHANGED
@@ -99,6 +99,15 @@ tokenizer = AutoTokenizer.from_pretrained(
 
 ```
 
+It can also be used with text-generation-inference
+
+```sh
+model=Writer/palmyra-large
+volume=$PWD/data
+
+docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
+```
+
 ### Limitations and Biases
 
 Palmyra Large’s core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Palmyra Large, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Palmyra Large to produce factually correct results.
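Once the container from the added example is running, the server can be exercised with a plain HTTP request. Below is a minimal sketch of such a call, assuming text-generation-inference's /generate endpoint and the 8080:80 port mapping from the docker run line in the diff; the prompt string is only an illustration:

```sh
# Minimal smoke test for the text-generation-inference container above.
# Assumes port 80 inside the container is published on localhost:8080,
# matching the -p 8080:80 flag in the docker run command.
curl 127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "Palmyra Large is", "parameters": {"max_new_tokens": 20}}'
```

The response is a JSON object whose generated_text field contains the model's continuation of the prompt.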