---
license: other
tags:
  - mlx
datasets:
  - ai2_arc
  - unalignment/spicy-3.1
  - codeparrot/apps
  - facebook/belebele
  - boolq
  - jondurbin/cinematika-v0.1
  - drop
  - lmsys/lmsys-chat-1m
  - TIGER-Lab/MathInstruct
  - cais/mmlu
  - Muennighoff/natural-instructions
  - openbookqa
  - piqa
  - Vezora/Tested-22k-Python-Alpaca
  - cakiki/rosetta-code
  - Open-Orca/SlimOrca
  - spider
  - squad_v2
  - migtissera/Synthia-v1.3
  - winogrande
  - nvidia/HelpSteer
  - Intel/orca_dpo_pairs
  - unalignment/toxic-dpo-v0.1
  - jondurbin/truthy-dpo-v0.1
  - allenai/ultrafeedback_binarized_cleaned
  - Squish42/bluemoon-fandom-1-1-rp-cleaned
  - LDJnr/Capybara
  - JULIELab/EmoBank
  - kingbri/PIPPA-shareGPT
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---

# mlx-community/bagel-dpo-34b-v0.2-4bit-mlx

This model was converted to MLX format from [`jondurbin/bagel-dpo-34b-v0.2`](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2). Refer to the [original model card](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) for more details on the model.
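
For reference, conversions like this one are typically produced with the quantizing converter bundled in mlx-lm. The command below is a sketch of that flow, not a record of the exact invocation used for this checkpoint:

```bash
# Minimal sketch of a 4-bit conversion with the mlx-lm converter;
# the exact flags used for this upload are not documented here.
python -m mlx_lm.convert \
    --hf-path jondurbin/bagel-dpo-34b-v0.2 \
    -q
```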

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/bagel-dpo-34b-v0.2-4bit-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
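
For chat-style use, the prompt usually needs the model's chat template applied first. The sketch below assumes the converted tokenizer carries a chat template (common for mlx-community conversions) and caps the output with `max_tokens`; fall back to the prompt format documented on the original card if it differs:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/bagel-dpo-34b-v0.2-4bit-mlx")

# Assumes the tokenizer ships with a chat template; verify against the
# original card's prompt format before relying on it.
messages = [{"role": "user", "content": "Write a haiku about bagels."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```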