---
library_name: peft
datasets:
- HachiML/databricks-dolly-15k-ja-for-peft
language:
- en
- ja
---

## JGLUE Score

I evaluated this model on the following JGLUE tasks. Here are the scores:

| Task                 | Llama-2-13b-hf(*) | This Model |
|----------------------|:-----------------:|:----------:|
| JCommonsenseQA (acc) | 75.06             | 75.78      |
| JNLI (acc)           | 22.18             | 50.69      |
| MARC-ja (acc)        | 38.83             | 79.64      |
| JSQuAD (exact match) | 76.13             | 62.83      |
| **Average**          | **53.05**         | **67.23**  |

- Note: the v0.3 prompt template was used.
- The JGLUE scores were measured with [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable).
- (*) The baseline Llama-2-13b-hf scores were measured with the same method.

## How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

model_name = "meta-llama/Llama-2-13b-hf"

# 4-bit NF4 quantization config, matching the one used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the quantized base model
pt_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # place layers automatically on the available GPU(s)
)

# Attach the QLoRA adapter on top of the base model
peft_name = "HachiML/Llama-2-13b-hf-qlora-dolly-ja-2ep"
model = PeftModel.from_pretrained(
    pt_model,
    peft_name,
)
```

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
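
## Inference example

A minimal generation sketch, continuing from the loading code in "How to use" above. The prompt layout and the sampling settings are illustrative assumptions, not the v0.3 template used for the JGLUE evaluation:

```python
# Minimal generation sketch. The prompt layout below is a placeholder
# assumption (Dolly-style instruction/response); swap in the v0.3 template
# referenced above if you need behavior comparable to the evaluation.
prompt = "### Instruction:\n日本の首都はどこですか？\n\n### Response:\n"  # "What is the capital of Japan?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,          # illustrative values; tune as needed
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,  # Llama-2 defines no pad token
    )

# Strip the prompt tokens and decode only the newly generated response
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
```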
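
Since the base model is loaded in 4-bit NF4, its weights occupy roughly 7 GB, so both adapter loading and inference should fit on a single consumer GPU.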