Update README.md
README.md CHANGED
@@ -3,6 +3,20 @@ library_name: peft
 datasets:
 - HachiML/databricks-dolly-15k-ja-for-peft
 ---
+## JGLUE Score
+I evaluated this model on the following JGLUE tasks. Here are the scores:
+| Task                | Llama-2-13b-hf(*) | This Model |
+|---------------------|:-----------------:|:----------:|
+| JCOMMONSENSEQA(acc) | 75.06             | 79.17      |
+| JNLI(acc)           | 22.18             | 47.82      |
+| MARC_JA(acc)        | 38.83             | 88.14      |
+| JSQUAD(exact_match) | 76.13             | 29.85      |
+| **Average**         | **53.05**         | **61.25**  |
+- Note: the v0.3 prompt template was used.
+- The JGLUE scores were measured with the following evaluation harness:
+  [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)
+
+
 ## Training procedure
 
 
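Below is a minimal sketch of loading this PEFT adapter on top of its base model before running an evaluation. It assumes the base model is `meta-llama/Llama-2-13b-hf` (inferred from the comparison column in the score table, not stated in the diff), and `ADAPTER_ID` is a hypothetical placeholder for this model's repository id.

```python
# Sketch: attach the PEFT adapter to its (assumed) base model for evaluation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Llama-2-13b-hf"   # assumed base model, inferred from the score table
ADAPTER_ID = "<this-model-repo-id>"     # hypothetical placeholder for this adapter's repo

# Load tokenizer and frozen base model.
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")

# Attach the PEFT adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()
```

The scores in the table above were produced with the Stability-AI lm-evaluation-harness (jp-stable branch) linked in the diff; this sketch only covers loading the adapter, not the harness invocation itself.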