Update README.md
README.md
CHANGED
@@ -25,7 +25,7 @@ pip install mlx-lm
 You can use `mlx-lm` from the command line. For example:
 
 ```
-python -m mlx_lm.generate --model mistralai/Mistral-7B-v0.1 --prompt "hello"
+python -m mlx_lm.generate --model mistralai/Mistral-7B-Instruct-v0.1 --prompt "hello"
 ```
 
 This will download a Mistral 7B model from the Hugging Face Hub and generate
@@ -40,7 +40,7 @@ python -m mlx_lm.generate --help
 To quantize a model from the command line run:
 
 ```
-python -m mlx_lm.convert --hf-path mistralai/Mistral-7B-v0.1 -q
+python -m mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.1 -q
 ```
 
 For more options run:
@@ -55,7 +55,7 @@ You can upload new models to Hugging Face by specifying `--upload-repo` to
 
 ```
 python -m mlx_lm.convert \
-    --hf-path mistralai/Mistral-7B-v0.1 \
+    --hf-path mistralai/Mistral-7B-Instruct-v0.1 \
     -q \
     --upload-repo mlx-community/my-4bit-mistral
 ```