---
language:
- en
- ja
license: llama3
library_name: transformers
tags:
- mlx
pipeline_tag: text-generation
model_type: llama
---
# mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit
The model [mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit](https://huggingface.co/mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit) was converted to MLX format from [tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1) using mlx-lm version **0.13.1**.
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit")

# Generate a completion for a plain-text prompt
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
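
Since this is an instruct-tuned model, prompts generally work better when wrapped in the model's chat template. A minimal sketch, assuming the tokenizer loaded by mlx-lm exposes the standard `apply_chat_template` method and ships a chat template (the example message content is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit")

# Wrap the user message in the model's chat template before generating,
# falling back to the raw string if no template is available.
messages = [{"role": "user", "content": "hello"}]
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    prompt = "hello"

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```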