Update README.md

README.md CHANGED
@@ -25,7 +25,7 @@ pip install mlx-lm
 You can use `mlx-lm` from the command line. For example:
 
 ```
-
+mlx_lm.generate --model mlx-community/Mistral-7B-Instruct-v0.3-4bit --prompt "hello"
 ```
 
 This will download a Mistral 7B model from the Hugging Face Hub and generate
@@ -34,19 +34,19 @@ text using the given prompt.
 For a full list of options run:
 
 ```
-
+mlx_lm.generate --help
 ```
 
 To quantize a model from the command line run:
 
 ```
-
+mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.3 -q
 ```
 
 For more options run:
 
 ```
-
+mlx_lm.convert --help
 ```
 
 You can upload new models to Hugging Face by specifying `--upload-repo` to
@@ -54,8 +54,8 @@ You can upload new models to Hugging Face by specifying `--upload-repo` to
 [MLX Hugging Face community](https://huggingface.co/mlx-community) you can do:
 
 ```
-
-    --hf-path mistralai/Mistral-7B-Instruct-v0.
+mlx_lm.convert \
+    --hf-path mistralai/Mistral-7B-Instruct-v0.3 \
     -q \
     --upload-repo mlx-community/my-4bit-mistral
 ```
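
For reference, the generation step in the updated README can also be run from Python rather than the command line. The sketch below assumes the `load`/`generate` helpers that `mlx-lm` exposes as its Python API; exact keyword arguments (such as `max_tokens`) may differ between releases, and the model name is simply reused from the README example.

```python
# Minimal sketch of the README's CLI generation example via mlx-lm's Python API.
# Assumes the `load`/`generate` helpers; keyword names may vary by version.
from mlx_lm import load, generate

# The first call downloads the 4-bit Mistral weights from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Same prompt as the README's `mlx_lm.generate` command.
text = generate(model, tokenizer, prompt="hello", max_tokens=100)
print(text)
```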
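Similarly, quantizing and uploading a model can be scripted instead of invoking the `mlx_lm.convert` command. The keyword names below (`hf_path`, `quantize`, `upload_repo`) are assumptions mirrored from the CLI flags shown in the diff and may not match every mlx-lm release.

```python
# Hedged sketch: a Python equivalent of
#   mlx_lm.convert --hf-path ... -q --upload-repo ...
# Keyword names mirror the CLI flags and are assumptions, not a confirmed API.
from mlx_lm import convert

convert(
    hf_path="mistralai/Mistral-7B-Instruct-v0.3",  # source repo on the Hub
    quantize=True,                                 # -q: quantize the weights (4-bit by default)
    upload_repo="mlx-community/my-4bit-mistral",   # push the converted weights to this repo
)
```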