---
language:
- en
- ja
library_name: transformers
license: llama3
pipeline_tag: text-generation
tags:
- mlx
model_type: llama
---

# mlx-community/Llama-3-Swallow-8B-v0.1-8bit

The model [mlx-community/Llama-3-Swallow-8B-v0.1-8bit](https://huggingface.co/mlx-community/Llama-3-Swallow-8B-v0.1-8bit) was converted to MLX format from [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) using mlx-lm version **0.18.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3-Swallow-8B-v0.1-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
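
As an alternative to the Python API, mlx-lm also provides a command-line generator. Below is a minimal sketch assuming the `mlx_lm.generate` entry point and flags as shipped around mlx-lm 0.18.x; check `python -m mlx_lm.generate --help` for the options in your installed version.

```bash
# Sketch: generate text from the command line with mlx-lm (flags assumed per mlx-lm 0.18.x)
python -m mlx_lm.generate \
  --model mlx-community/Llama-3-Swallow-8B-v0.1-8bit \
  --prompt "hello" \
  --max-tokens 100
```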