Commit: 31ef570
Parent(s): 2ac7bd0
Update documentation on local inference
README.md CHANGED

@@ -53,10 +53,25 @@ Both the example above use the HF Inference API or HF Endpoints API.
 
 If you want to run the model locally, you need to run this inference server locally: https://github.com/huggingface/text-generation-inference
 
-And add this to your `.env.local`
+And add this to your `.env.local`, feel free to adjust/remove the parameters and the preprompt:
 
 ```
-MODELS=`[{
+MODELS=`[{
+  "name": "...",
+  "endpoints": [{"url": "http://127.0.0.1:8080/generate_stream"}],
+  "userMessageToken": "<|prompter|>",
+  "assistantMessageToken": "<|assistant|>",
+  "messageEndToken": "</s>",
+  "preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
+  "parameters": {
+    "temperature": 0.9,
+    "top_p": 0.95,
+    "repetition_penalty": 1.2,
+    "top_k": 50,
+    "truncate": 1000,
+    "max_new_tokens": 1000
+  }
+}]`
 ```
 
 ## Building
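For context on how the new fields fit together: chat-ui builds a single prompt string out of the preprompt and the per-message role tokens, then sends it to the text-generation-inference endpoint configured above. The sketch below is a rough, standalone illustration of that flow in TypeScript, not chat-ui's actual code; it assumes Node 18+ (for the global `fetch`), uses the non-streaming `/generate` route rather than `/generate_stream` for simplicity, and the `buildPrompt` helper and `Message` type are invented for the example.

```ts
// Standalone sketch (not chat-ui's implementation) of how the .env.local
// fields above are typically used: the preprompt and role tokens are
// concatenated into one prompt string, which is POSTed to the local
// text-generation-inference server.

const ENDPOINT = "http://127.0.0.1:8080/generate"; // non-streaming sibling of /generate_stream

const config = {
  userMessageToken: "<|prompter|>",
  assistantMessageToken: "<|assistant|>",
  messageEndToken: "</s>",
  // Shortened here; use the full preprompt from the config above.
  preprompt: "Below are a series of dialogues between various people and an AI assistant. [...]\n-----\n",
  parameters: {
    temperature: 0.9,
    top_p: 0.95,
    repetition_penalty: 1.2,
    top_k: 50,
    truncate: 1000,
    max_new_tokens: 1000,
  },
};

type Message = { from: "user" | "assistant"; content: string };

// Wrap each message in its role token, end it with the message-end token,
// and finish the prompt with the assistant token so the model answers next.
function buildPrompt(messages: Message[]): string {
  const dialogue = messages
    .map(
      (m) =>
        (m.from === "user" ? config.userMessageToken : config.assistantMessageToken) +
        m.content +
        config.messageEndToken,
    )
    .join("");
  return config.preprompt + dialogue + config.assistantMessageToken;
}

async function main(): Promise<void> {
  const prompt = buildPrompt([{ from: "user", content: "What is the capital of France?" }]);

  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ inputs: prompt, parameters: config.parameters }),
  });
  if (!res.ok) {
    throw new Error(`text-generation-inference returned ${res.status}: ${await res.text()}`);
  }

  // The /generate route responds with JSON that includes `generated_text`.
  const data = await res.json();
  console.log(data.generated_text ?? data);
}

main().catch(console.error);
```

If this request succeeds, the same server should work from chat-ui once `MODELS` in `.env.local` points at it; if it fails, check that text-generation-inference is actually listening on `127.0.0.1:8080`.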