---
base_model: rinna/llama-3-youko-8b-instruct
datasets:
  - CohereForAI/aya_dataset
  - kunishou/databricks-dolly-15k-ja
  - kunishou/HelpSteer-35k-ja
  - kunishou/HelpSteer2-20k-ja
  - kunishou/hh-rlhf-49k-ja
  - kunishou/oasst1-chat-44k-ja
  - kunishou/oasst2-chat-68k-ja
  - meta-math/MetaMathQA
  - OpenAssistant/oasst1
  - OpenAssistant/oasst2
  - sahil2801/CodeAlpaca-20k
language:
  - ja
  - en
license: llama3
tags:
  - llama
  - llama-3
  - mlx
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
inference: false
base_model_relation: merge
---

# mlx-community/rinna-llama-3-youko-8b-instruct-4bit

The model [mlx-community/rinna-llama-3-youko-8b-instruct-4bit](https://huggingface.co/mlx-community/rinna-llama-3-youko-8b-instruct-4bit) was converted to MLX format from [rinna/llama-3-youko-8b-instruct](https://huggingface.co/rinna/llama-3-youko-8b-instruct) using mlx-lm version **0.19.1**.
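
For reference, conversions like this are normally produced with mlx-lm's `convert` utility. The sketch below shows a typical 4-bit quantization call; it is an illustration of the general workflow, not necessarily the exact command used to build this repository, and the `mlx_path` value is just an example output directory.

```python
from mlx_lm import convert

# Quantize the base model to 4-bit MLX weights and write them to a local
# directory. quantize=True uses mlx-lm's defaults (4 bits, group size 64).
convert(
    "rinna/llama-3-youko-8b-instruct",
    mlx_path="rinna-llama-3-youko-8b-instruct-4bit",  # example output path
    quantize=True,
)
```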

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (on first use) and load the 4-bit model and its tokenizer.
model, tokenizer = load("mlx-community/rinna-llama-3-youko-8b-instruct-4bit")

prompt = "hello"

# Apply the model's chat template, if it has one, so the instruct-tuned
# model receives the conversation format it expects.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
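
`generate` also accepts keyword arguments such as `max_tokens` to cap the length of the completion (the exact set of options depends on your mlx-lm version). Continuing the snippet above:

```python
# Limit the completion to at most 256 new tokens.
response = generate(
    model, tokenizer, prompt=prompt, max_tokens=256, verbose=True
)
```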