---
base_model:
- mistralai/Ministral-8B-Instruct-2410
---

This model is the 3-bit quantized (Q3_K_M, GGUF) version of [Ministral-8B](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) by Mistral AI. Follow the instructions below to run the model on your device.
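
If you have not downloaded the quantized weights yet, you can fetch the GGUF file with `huggingface_hub`. This is a minimal sketch; the `repo_id` below is a placeholder, so replace it with this repository's id:

```
from huggingface_hub import hf_hub_download

# NOTE: placeholder repo_id; replace it with this repository's id
gguf_path = hf_hub_download(
    repo_id="<your-namespace>/ministral-8b-gguf",
    filename="ministral-8b_Q3_K_M.gguf",  # file name used in the examples below
    local_dir=".",
)
print(gguf_path)  # local path of the downloaded GGUF file
```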

There are multiple ways to run inference with the model. First, let's build `llama.cpp` and use its command-line interface.

1. Install

```
git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B llama.cpp/build
cmake --build llama.cpp/build --config Release
```

2. Inference

```
./llama.cpp/build/bin/llama-cli -m ./ministral-8b_Q3_K_M.gguf -cnv -p "You are a helpful assistant"
```

Here, you can interact with the model from your terminal.

**Alternatively**, we can use the Python bindings of `llama.cpp` (`llama-cpp-python`) to run the model on either CPU or GPU.

1. Install

```
pip install --no-cache-dir llama-cpp-python==0.2.85 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu122
```

The extra index provides pre-built CUDA 12.2 wheels; for a CPU-only setup, drop the `--extra-index-url` flag.

2. Inference on CPU

```
from llama_cpp import Llama

model_path = "./ministral-8b_Q3_K_M.gguf"
# n_ctx sets the context window; the library default (512) is too small for long answers
llm = Llama(model_path=model_path, n_ctx=4096, n_threads=8, verbose=False)

prompt = "What should I do when my eyes are dry?"
# create_chat_completion applies the chat template embedded in the GGUF,
# so the prompt is formatted correctly for Ministral-8B-Instruct
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=4096,
)
print(output["choices"][0]["message"]["content"])
```

3. Inference on GPU

```
from llama_cpp import Llama

model_path = "./ministral-8b_Q3_K_M.gguf"
# n_gpu_layers=-1 offloads all layers to the GPU (requires a CUDA-enabled build of llama-cpp-python)
llm = Llama(model_path=model_path, n_ctx=4096, n_threads=8, n_gpu_layers=-1, verbose=False)

prompt = "What should I do when my eyes are dry?"
# create_chat_completion applies the chat template embedded in the GGUF,
# so the prompt is formatted correctly for Ministral-8B-Instruct
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=4096,
)
print(output["choices"][0]["message"]["content"])
```
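
For interactive use you may prefer to stream tokens as they are generated. A minimal sketch with `llama-cpp-python`, assuming the same model path as above:

```
from llama_cpp import Llama

llm = Llama(model_path="./ministral-8b_Q3_K_M.gguf", n_ctx=4096, n_threads=8, verbose=False)

# stream=True yields OpenAI-style chunks instead of a single completed response
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "What should I do when my eyes are dry?"}],
    max_tokens=4096,
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
print()
```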