---
library_name: peft
datasets:
- HachiML/databricks-dolly-15k-ja-for-peft
---
## JGLUE Scores
I evaluated this model on the following JGLUE tasks. The scores are shown below:
| Task                | stablelm-base-alpha-7b | This Model | stablelm-instruct-alpha-7b |
|---------------------|:-----------------:|:----------:|:-----------------:|
| JCOMMONSENSEQA(acc) | 33.42             | 79.17      | 82.22             |
| JNLI(acc)           | 43.34             | 47.82      | 52.05             |
| MARC_JA(acc)        | 96.73             | 88.14      | 82.88             |
| JSQUAD(exact_match) | 70.62             | 29.85      | 63.26             |
| **Average**         | **61.03**         | **61.25**  | **70.10**         |
- Note: the v0.3 prompt template was used.
- The JGLUE scores were measured with the following harness: [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)
- The scores for "stablelm-base-alpha-7b" and "stablelm-instruct-alpha-7b" are taken from the GitHub repository above.


## Training procedure


The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
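
For reference, this corresponds to the following `transformers` `BitsAndBytesConfig` (a minimal sketch; the unset `llm_int8_*` keys are left at their `bitsandbytes` defaults):

```python
import torch
from transformers import BitsAndBytesConfig

# NF4 4-bit quantization without double quantization, computing in float16,
# matching the config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```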
### Framework versions


- PEFT 0.4.0
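
### Loading the adapter

A minimal loading sketch with `peft`. The base-model id below is an assumption inferred from the comparison table above, and the adapter id is a placeholder for this repository; check `adapter_config.json` for the actual `base_model_name_or_path`:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

BASE_MODEL_ID = "stabilityai/japanese-stablelm-base-alpha-7b"  # assumption
ADAPTER_ID = "<this-repository-id>"  # placeholder

# Quantize the base model the same way it was quantized during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # the stablelm-alpha models ship custom modeling code
)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)  # attach the PEFT adapter weights
```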