---
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
license_link: LICENSE
quantized_by: jartine
prompt_template: |
  [INST] {{prompt}} [/INST]
tags:
- llamafile
---

# Mistral Nemo Instruct 2407 - llamafile

- Model creator: [Mistral AI](https://huggingface.co/mistralai/)
- Original model: [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)

The model is packaged into executable weights, which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile). This makes it
easy to use the model on Linux, macOS, Windows, FreeBSD, OpenBSD, and
NetBSD for AMD64 and ARM64.

## Quickstart

Running the following on a desktop OS will launch a tab in your web
browser with a chatbot interface.

```
wget https://huggingface.co/Mozilla/Mistral-Nemo-Instruct-2407-llamafile/resolve/main/Mistral-Nemo-Instruct-2407.Q6_K.llamafile
chmod +x Mistral-Nemo-Instruct-2407.Q6_K.llamafile
./Mistral-Nemo-Instruct-2407.Q6_K.llamafile
```

This model has a max context window size of 128k tokens. By default, a
context window size of 8192 tokens is used. You may increase this to the
maximum by passing the `-c 0` flag.

On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card
driver needs to be installed. If the prebuilt DSOs fail, the CUDA or
ROCm SDK may need to be installed, in which case llamafile builds a
native module just for your system.

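For example, to run with the full 128k context window and the model
fully offloaded to the GPU, the two flags above can be combined:

```
./Mistral-Nemo-Instruct-2407.Q6_K.llamafile -c 0 -ngl 999
```
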
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).

Having **trouble**? See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.

## Prompting

Mistral models work well with the default settings of the llamafile
server GUI. You shouldn't need to specify a custom prompt template.

Here's an example of how to prompt Mistral on the command line:

```
./Mistral-Nemo-Instruct-2407.Q6_K.llamafile -p '[INST]The Belobog Academy has discovered a new, invasive species of algae that can double itself in one day, and in 30 days fills a whole reservoir - contaminating the water supply. How many days would it take for the algae to fill half of the reservoir?[/INST]'
```

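The llamafile server also exposes an OpenAI-compatible HTTP API (see the
llamafile README for details). As a minimal sketch, assuming the server
is running on its default port 8080, you can query it from Python with
nothing but the standard library:

```py
# Minimal sketch: query the llamafile server's OpenAI-compatible
# chat completions endpoint. Assumes the default port 8080; adjust
# the URL if you started the server with different options.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "local",  # placeholder name; the server runs a single model
        "messages": [{"role": "user", "content": "Say hello in French."}],
        "temperature": 0.3,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```
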
## About llamafile

llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.

---

# Model Card for Mistral-Nemo-Instruct-2407

The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models of similar or smaller size.

For more details about this model, please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).

## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement for Mistral 7B

## Model Architecture
Mistral Nemo is a transformer model, with the following architecture choices (a rough parameter-count sketch follows the list):
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2^17 ≈ 128k
- **Rotary embeddings:** theta = 1M

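These hyperparameters roughly account for the model's ~12B parameters.
Here is a back-of-the-envelope estimate, assuming untied input/output
embeddings, the usual three-matrix SwiGLU MLP, and no biases (an
illustrative sketch, not an official breakdown):

```py
# Rough parameter count from the architecture list above.
# Small terms such as RMSNorm weights are ignored.
layers, dim, head_dim = 40, 5120, 128
n_heads, n_kv_heads = 32, 8
hidden, vocab = 14336, 2**17

attn = dim * n_heads * head_dim          # Q projection
attn += 2 * dim * n_kv_heads * head_dim  # K and V (GQA uses fewer heads)
attn += n_heads * head_dim * dim         # output projection
mlp = 3 * dim * hidden                   # SwiGLU: gate, up, and down matrices
embed = 2 * vocab * dim                  # input embedding + output head

total = layers * (attn + mlp) + embed
print(f"~{total / 1e9:.1f}B parameters")  # ~12.2B
```
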
## Metrics

### Main Benchmarks

| Benchmark | Score |
| --- | --- |
| HellaSwag (0-shot) | 83.5% |
| Winogrande (0-shot) | 76.8% |
| OpenBookQA (0-shot) | 60.6% |
| CommonSenseQA (0-shot) | 70.4% |
| TruthfulQA (0-shot) | 50.3% |
| MMLU (5-shot) | 68.0% |
| TriviaQA (5-shot) | 73.8% |
| NaturalQuestions (5-shot) | 31.2% |

### Multilingual Benchmarks (MMLU)

| Language | Score |
| --- | --- |
| French | 62.3% |
| German | 62.7% |
| Spanish | 64.6% |
| Italian | 61.3% |
| Portuguese | 63.3% |
| Russian | 59.2% |
| Chinese | 59.0% |
| Japanese | 59.0% |

## Usage

The model can be used with three different frameworks:

- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`NeMo`](https://github.com/NVIDIA/NeMo): See [nvidia/Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct)

### Mistral Inference

#### Install

It is recommended to use `mistralai/Mistral-Nemo-Instruct-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.

```
pip install mistral_inference
```

#### Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(
    repo_id="mistralai/Mistral-Nemo-Instruct-2407",
    allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"],
    local_dir=mistral_models_path,
)
```

#### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:

```
mistral-chat $HOME/mistral_models/Nemo-Instruct --instruct --max_tokens 256 --temperature 0.35
```

For example, try out something like:
```
How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar.
```

#### Instruction following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# mistral_models_path is defined in the Download section above
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar."

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```

#### Function calling

```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# mistral_models_path is defined in the Download section above
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the user's location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```

### Transformers

> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install git+https://github.com/huggingface/transformers.git
> ```

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-2407", max_new_tokens=128)
chatbot(messages)
```

## Function calling with `transformers`

To use this example, you'll need `transformers` version 4.42.0 or higher. Please see the
[function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling)
in the `transformers` docs for more information.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mistralai/Mistral-Nemo-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)

def get_current_weather(location: str, format: str):
    """
    Get the current weather

    Args:
        location: The city and state, e.g. San Francisco, CA
        format: The temperature unit to use. Infer this from the user's location. (choices: ["celsius", "fahrenheit"])
    """
    pass

conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]

# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True,
)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# move the tokenized prompt to the same device as the model
inputs = tokenizer(tool_use_prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool
results to the chat history so that the model can use them in its next generation. For a full tool calling example, please
see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling),
and note that Mistral **does** use tool call IDs, so these must be included in your tool calls and tool results. They should be
exactly 9 alphanumeric characters.

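As a sketch of what that cycle looks like, continuing the example above,
here is one way to generate a 9-character alphanumeric tool call ID and
append the call and its result to the history before re-rendering the
chat template. The exact message fields follow the `transformers`
chat-templating conventions described in the guide linked above, so
treat them as assumptions to verify there:

```py
# Sketch: record a tool call and its result in `conversation`.
# Field names follow the transformers chat-templating guide; the
# weather value below is a stand-in for your function's real output.
import random
import string

# Mistral requires tool call IDs of exactly 9 alphanumeric characters.
tool_call_id = "".join(random.choices(string.ascii_letters + string.digits, k=9))

conversation.append({
    "role": "assistant",
    "tool_calls": [{
        "type": "function",
        "id": tool_call_id,
        "function": {
            "name": "get_current_weather",
            "arguments": {"location": "Paris, France", "format": "celsius"},
        },
    }],
})
conversation.append({
    "role": "tool",
    "tool_call_id": tool_call_id,
    "name": "get_current_weather",
    "content": "22.0",
})
```
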
> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires lower temperatures. We recommend using a temperature of 0.3.

## Limitations

The Mistral Nemo Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We look forward to engaging with the community on ways to
make the model respect guardrails more closely, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall