# mlx-community/quantized-gemma-7b-it

This model was converted to MLX format from [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it). Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/quantized-gemma-7b-it")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
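
Because gemma-7b-it is instruction-tuned, prompts generally behave better when wrapped in the model's chat template rather than passed as raw text. A minimal sketch, assuming the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` method (the message content below is just an illustrative placeholder):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/quantized-gemma-7b-it")

# Wrap the user message in the chat template so the instruction-tuned
# weights see the turn markers they were trained on.
messages = [{"role": "user", "content": "Write a haiku about quantization."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```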