# mlx-community/idefics2-8b-chatty-4bit
This model was converted to MLX format from [HuggingFaceM4/idefics2-8b-chatty](https://huggingface.co/HuggingFaceM4/idefics2-8b-chatty) using mlx-vlm version 0.1.0.
Refer to the [original model card](https://huggingface.co/HuggingFaceM4/idefics2-8b-chatty) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/idefics2-8b-chatty-4bit --max-tokens 100 --temp 0.0
```
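
For programmatic use, the model can also be loaded through the mlx-vlm Python API. The sketch below assumes the `load`/`generate` helpers exposed by the mlx-vlm package; exact argument order and names have varied across mlx-vlm releases, and the image URL is a hypothetical placeholder, so adjust to the version you have installed.

```python
# Minimal sketch using the mlx-vlm Python API (assumed from the mlx-vlm
# README; signatures may differ between versions).
from mlx_vlm import load, generate

# Load the 4-bit quantized model and its processor from the Hub.
model, processor = load("mlx-community/idefics2-8b-chatty-4bit")

# Hypothetical example image; replace with your own local path or URL.
image = "http://images.cocodataset.org/val2017/000000039769.jpg"
prompt = "Describe this image."

# Generate a response conditioned on the image and the prompt, mirroring
# the CLI flags above (--max-tokens 100 --temp 0.0).
output = generate(model, processor, prompt, image, max_tokens=100, temp=0.0)
print(output)
```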