---
language:
  - en
license: apache-2.0
tags:
  - mlx
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
  - cognitivecomputations/dolphin
  - cognitivecomputations/dolphin-coder
  - cognitivecomputations/samantha-data
  - jondurbin/airoboros-2.2.1
  - teknium/openhermes-2.5
  - m-a-p/Code-Feedback
  - m-a-p/CodeFeedback-Filtered-Instruction
---

# mlx_community/dolphin-2.8-mistral-7b-v02-Q_8-mlx

This model was converted to MLX format from cognitivecomputations/dolphin-2.8-mistral-7b-v02 using mlx-lm version 0.5.0. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx_community/dolphin-2.8-mistral-7b-v02-Q_8-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
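Dolphin 2.8 is tuned on ChatML-style conversations, so instruction prompts generally work best wrapped in `<|im_start|>`/`<|im_end|>` tags rather than passed as raw text. Below is a minimal sketch of building such a prompt by hand before calling `generate`; the `chatml_prompt` helper is hypothetical (not part of mlx-lm), and the exact template the tokenizer ships with may differ:

```python
# Hypothetical helper: render a list of {"role", "content"} dicts
# into a ChatML prompt string, assuming the standard ChatML layout.
def chatml_prompt(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant")  # cue the model to respond
    return "\n".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},
    {"role": "user", "content": "hello"},
])
# Pass `prompt` to generate(model, tokenizer, prompt=prompt, verbose=True)
```

The same result can usually be obtained from the tokenizer's built-in chat template when one is bundled with the model; the manual version above just makes the expected format explicit.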