
RakutenAI-7B-instruct

Model Description

RakutenAI-7B is a systematic initiative that brings the latest technologies to the world of Japanese LLMs. RakutenAI-7B achieves the best scores on Japanese language understanding benchmarks while maintaining competitive performance on English test sets, compared with similar models such as OpenCalm, Elyza, Youri, Nekomata and Swallow. RakutenAI-7B leverages the Mistral model architecture and is based on the Mistral-7B-v0.1 pre-trained checkpoint, exemplifying a successful retrofitting of the pre-trained model weights. Moreover, we extend Mistral's vocabulary from 32k to 48k tokens to offer a better character-per-token rate for Japanese.
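As a rough illustration of the improved character-per-token rate, the sketch below (a minimal example; the Mistral-7B-v0.1 tokenizer repository and the sample sentence are illustrative choices, not part of this card) tokenizes the same Japanese sentence with the original Mistral tokenizer and the extended RakutenAI tokenizer and compares the counts.

from transformers import AutoTokenizer

text = "羽田空港から東京駅までの行き方を教えてください。"  # sample Japanese sentence (illustrative)

for name in ["mistralai/Mistral-7B-v0.1", "Rakuten/RakutenAI-7B-instruct"]:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok.encode(text, add_special_tokens=False))
    # A larger vocabulary should yield fewer tokens, i.e. more characters per token.
    print(f"{name}: {n_tokens} tokens, {len(text) / n_tokens:.2f} chars/token")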

The technical report is available on arXiv.

If you are looking for a foundation model, check RakutenAI-7B.

If you are looking for a chat-tuned model, check RakutenAI-7B-chat.

Model Evaluation Results

| Model Name | 7-Avg. excl. XLSum-ja | Avg. | JCS (accuracy, 3-shots) | JNLI (accuracy, 3-shots) | MARC-ja (accuracy, 3-shots) | JSQuAD (exact-match, 2-shots) | Jaqket v2 (exact-match, 1-shot) | XLSum-ja (rouge-2, 1-shot) | xWino (accuracy, 0-shot) | MGSM (accuracy, 5-shots) |
|---|---|---|---|---|---|---|---|---|---|---|
| rakuten-ai-7b-instruct | 77.32 | 68.74 | 93.03 | 90.39 | 96.00 | 80.44 | 81.79 | 8.67 | 75.18 | 24.40 |
| youri-7b-instruction | 73.35 | 66.84 | 86.06 | 70.13 | 97.03 | 82.53 | 79.47 | 21.29 | 79.04 | 19.20 |
| japanese-stablelm-instruct-gamma-7b | 65.46 | 59.98 | 83.82 | 16.97 | 95.68 | 76.20 | 81.87 | 21.58 | 82.06 | 21.60 |
| swallow-7b-instruct | 64.29 | 58.25 | 83.38 | 26.50 | 94.46 | 75.62 | 81.01 | 16.01 | 76.23 | 12.80 |
| elyza-japanese-Llama-2-7b-instruct | 60.04 | 53.19 | 65.15 | 57.44 | 91.51 | 67.29 | 58.51 | 5.20 | 70.80 | 9.60 |
| elyza-japanese-Llama-2-7b-fast-instruct | 57.22 | 50.48 | 70.69 | 36.48 | 92.75 | 68.87 | 62.29 | 3.36 | 59.44 | 10.00 |
| nekomata-7b-instruction | 49.04 | 44.14 | 85.08 | 42.48 | 96.99 | 8.51 | 10.91 | 9.81 | 76.12 | 23.20 |

Table 1: RakutenAI-7B-instruct model performance on Japanese LM-Harness metrics in comparison with other models.

Our model achieves the highest average score, more than 3 points ahead of the next best model. Models are sorted by 7-Avg. excl. XLSum-ja. For the Japanese LM-Harness we use the following commit with the v0.3 prompt version: https://github.com/Stability-AI/lm-evaluation-harness/tree/0fa86429679f521161d5b81a94c0c385e0a0976d
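To make the two average columns concrete (this is simply a reading of Table 1 reproduced in code, not a script shipped with the model): Avg. is the mean over all eight task scores, and 7-Avg. excl. XLSum-ja is the mean over the seven tasks remaining after dropping XLSum-ja. The sketch below reproduces both figures for rakuten-ai-7b-instruct.

scores = {
    "JCS": 93.03, "JNLI": 90.39, "MARC-ja": 96.00, "JSQuAD": 80.44,
    "Jaqket v2": 81.79, "XLSum-ja": 8.67, "xWino": 75.18, "MGSM": 24.40,
}

avg_all = sum(scores.values()) / len(scores)  # mean over all 8 tasks -> 68.74
avg_excl = sum(v for k, v in scores.items() if k != "XLSum-ja") / (len(scores) - 1)  # -> 77.32
print(f"Avg.: {avg_all:.2f}, 7-Avg. excl. XLSum-ja: {avg_excl:.2f}")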

| Model Name | Avg. | ARC (accuracy, 25-shots) | HellaSwag (accuracy, 10-shots) | MMLU (accuracy, 5-shots) | TruthfulQA (accuracy, 6-shots) |
|---|---|---|---|---|---|
| rakuten-ai-7b-instruct | 61.32 | 58.62 | 82.70 | 60.32 | 43.63 |
| japanese-stablelm-instruct-gamma-7b | 55.91 | 50.43 | 77.10 | 54.61 | 41.50 |
| elyza-japanese-Llama-2-7b-fast-instruct | 54.21 | 53.58 | 77.69 | 46.91 | 38.67 |
| elyza-japanese-Llama-2-7b-instruct | 54.07 | 52.05 | 78.33 | 47.09 | 38.83 |
| nekomata-7b-instruction | 52.84 | 50.34 | 73.67 | 48.53 | 38.81 |
| youri-7b-instruction | 52.11 | 48.98 | 75.66 | 45.41 | 38.38 |
| swallow-7b-instruct | 50.32 | 47.61 | 72.27 | 40.77 | 40.62 |

Table 2: RakutenAI-7B-instruct model performance on English LM-Harness metrics in comparison with other models.

Our model achieves the highest average score, more than 5 points ahead of the next best model. For the English LM-Harness we use the following commit: https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463

An independent evaluation by Kamata et al. for the Nejumi LLM Leaderboard Neo, using a weighted average of llm-jp-eval and the Japanese MT-bench, also confirms that the chat and instruct versions of RakutenAI-7B achieve the highest performance among open LLMs of similar size, with scores of 0.393 and 0.331 respectively, as of 22 March 2024.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Rakuten/RakutenAI-7B-instruct"

# Load the tokenizer and model; device_map="auto" places the weights on the available GPU(s).
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
model.eval()

requests = [
    "「馬が合う」はどう言う意味ですか",  # "What does the idiom 「馬が合う」 mean?"
    "How to make an authentic Spanish Omelette?",
]

# Prompt template expected by the instruct model: system preamble followed by USER/ASSISTANT turns.
system_message = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user_input} ASSISTANT:"

for req in requests:
    input_req = system_message.format(user_input=req)
    input_ids = tokenizer.encode(input_req, return_tensors="pt").to(device=model.device)
    tokens = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, skipping the prompt.
    out = tokenizer.decode(tokens[0][len(input_ids[0]):], skip_special_tokens=True)
    print("USER:\n" + req)
    print("ASSISTANT:\n" + out)
    print()
    print()
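Note that the example above samples stochastically (do_sample=True), so outputs differ between runs. For reproducible outputs, one option (a suggested variant, not a setting recommended by this card) is to switch the model.generate call in the loop above to greedy decoding:

    tokens = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=False,  # greedy decoding: deterministic output for a given prompt
        pad_token_id=tokenizer.eos_token_id,
    )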

Model Details

  • Developed by: Rakuten Group, Inc.
  • Language(s): Japanese, English
  • License: This model is licensed under Apache License, Version 2.0.
  • Instruction-Tuning Dataset: We fine-tune our foundation model to create RakutenAI-7B-instruct and RakutenAI-7B-chat using a mix of open-source and internally hand-crafted datasets. We use the train part of the following datasets (CC BY-SA license) for the instruction-tuned and chat-tuned models:

Limitations and Bias

The suite of RakutenAI-7B models is capable of generating human-like text on a wide range of topics. However, like all LLMs, they have limitations and can produce biased, inaccurate, or unsafe outputs. Please exercise caution and judgement while interacting with them.

Citation

For citing our work on the suite of RakutenAI-7B models, please use:

@misc{rakutengroup2024rakutenai7b,
      title={RakutenAI-7B: Extending Large Language Models for Japanese}, 
      author={{Rakuten Group, Inc.} and Aaron Levine and Connie Huang and Chenguang Wang and Eduardo Batista and Ewa Szymanska and Hongyi Ding and Hou Wei Chou and Jean-François Pessiot and Johanes Effendi and Justin Chiu and Kai Torben Ohlhus and Karan Chopra and Keiji Shinzato and Koji Murakami and Lee Xiong and Lei Chen and Maki Kubota and Maksim Tkachenko and Miroku Lee and Naoki Takahashi and Prathyusha Jwalapuram and Ryutaro Tatsushima and Saurabh Jain and Sunil Kumar Yadav and Ting Cai and Wei-Te Chen and Yandi Xia and Yuki Nakayama and Yutaka Higashiyama},
      year={2024},
      eprint={2403.15484},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}