---
library_name: peft
datasets:
- HachiML/databricks-dolly-15k-ja-alpaca-format
---

## JGLUE Score

I evaluated this model on the following JGLUE tasks. Here are the scores:

| Task | stablelm-base-alpha-7b | This Model | stablelm-instruct-alpha-7b |
|---------------------|:----------------------:|:----------:|:--------------------------:|
| JCOMMONSENSEQA (acc) | 33.42 | 79.17 | 82.22 |
| JNLI (acc) | 43.34 | 47.82 | 52.05 |
| MARC_JA (acc) | 96.73 | 88.14 | 82.88 |
| JSQUAD (exact_match) | 70.62 | 29.85 | 63.26 |
| **Average** | **61.03** | **61.25** | **70.10** |

- Note: the v0.3 prompt template was used.
- The JGLUE scores were measured with the following script: [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable).
- The JGLUE scores for "stablelm-base-alpha-7b" and "stablelm-instruct-alpha-7b" are taken from the GitHub repository linked above.

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
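
### Usage

The quantization settings listed above map directly onto a `transformers` `BitsAndBytesConfig`. The sketch below shows how the base model could be loaded with an equivalent 4-bit NF4 config; the base model ID is an assumption inferred from the evaluation table and the Japanese training data, not something stated in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization config mirroring the training settings listed above
# (nf4 quant type, no double quantization, float16 compute dtype).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumption: the base model is Stability AI's Japanese StableLM base model,
# inferred from the comparison table above; adjust if the actual base differs.
base_model_id = "stabilityai/japanese-stablelm-base-alpha-7b"

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
```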
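
With the quantized base model in memory, the PEFT adapter in this repository can be attached via `peft`. The adapter ID below is a placeholder; substitute this model's actual Hub ID.

```python
from peft import PeftModel

# Placeholder: replace with this repository's actual Hub ID.
adapter_id = "your-username/your-adapter-repo"

# Wrap the quantized base model with the trained LoRA adapter weights.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Quick smoke test: generate a short completion with the adapted model.
inputs = tokenizer("こんにちは、", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```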