---
language:
- ru
tags:
- llama-cpp
- gguf-my-repo
base_model: t-tech/T-pro-it-1.0
---

# al-x/T-pro-it-1.0-Q3_K_M-GGUF

This model was converted to GGUF format from [`t-tech/T-pro-it-1.0`](https://huggingface.co/t-tech/T-pro-it-1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/t-tech/T-pro-it-1.0) for more details on the model.
## Ollama

```bash
ollama run hf.co/al-x/T-pro-it-1.0-Q3_K_M-GGUF:Q3_K_M
```
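
## llama.cpp

You can also run the quantized file directly with llama.cpp. The sketch below assumes the repository contains a GGUF file named `t-pro-it-1.0-q3_k_m.gguf`; check the repo's file listing and adjust the name if it differs.

```bash
# Run a quick prompt with llama-cli, pulling the GGUF straight from the Hub.
# The --hf-file name is an assumption; replace it with the actual file in the repo.
llama-cli --hf-repo al-x/T-pro-it-1.0-Q3_K_M-GGUF \
  --hf-file t-pro-it-1.0-q3_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or expose an OpenAI-compatible HTTP endpoint (default port 8080) with a 2048-token context.
llama-server --hf-repo al-x/T-pro-it-1.0-Q3_K_M-GGUF \
  --hf-file t-pro-it-1.0-q3_k_m.gguf \
  -c 2048
```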