---
language:
  - en
  - ja
license: llama3
library_name: transformers
tags:
  - mlx
pipeline_tag: text-generation
model_type: llama
---

# mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit

The model mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit was converted to MLX format from [tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1) using mlx-lm version 0.13.1.
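
A comparable 4-bit conversion can be reproduced with mlx-lm's Python conversion API. The sketch below is illustrative, not the exact command used to build this repository; the output directory name is an assumption.

```python
# Hedged sketch of a 4-bit MLX conversion via mlx-lm's Python API.
from mlx_lm import convert

convert(
    "tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1",   # source Hugging Face repo
    mlx_path="Llama-3-Swallow-70B-Instruct-v0.1-4bit",   # illustrative output directory
    quantize=True,                                        # quantize weights (4-bit by default)
)
```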

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
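
Since this is an instruct-tuned Llama-3 model, prompts are normally wrapped in the Llama-3 chat template before generation. A minimal sketch, assuming the tokenizer returned by `load` delegates to the underlying Hugging Face tokenizer and exposes `apply_chat_template`:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3-Swallow-70B-Instruct-v0.1-4bit")

# Wrap the user message in the chat template (assumes the wrapped tokenizer
# provides the standard apply_chat_template method).
messages = [{"role": "user", "content": "東京の観光名所を教えてください。"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```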