Update README.md

---

## JGLUE Score
We evaluated our model using the following JGLUE tasks. Here are the scores:

| Task | Score |
|----------------|---------:|
| JSQUAD(exact_match) | 62.83 |
| JNLI(acc) | 50.69 |
| MARC_JA(acc) | 79.64 |
| **Average** | **67.23** |

- Note: Use v0.3 prompt template
- The JGLUE scores were measured using the following script:
[Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)
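For reference, a harness run along these lines reproduces such an evaluation. This is a hedged sketch, not the card's exact command: the model identifier is a placeholder, and the task names (including the `-0.3` suffix that selects the v0.3 prompt template) are assumptions that should be checked against the task list in the jp-stable branch:

```shell
# Sketch of a JGLUE evaluation with the Stability-AI lm-evaluation-harness fork.
# Model name and task identifiers below are illustrative placeholders.
git clone -b jp-stable https://github.com/Stability-AI/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e .

python main.py \
    --model hf-causal \
    --model_args pretrained=your-org/your-model \
    --tasks "jsquad-1.1-0.3,jnli-1.1-0.3,marc_ja-1.1-0.3" \
    --output_path ./jglue_results.json
```

The per-task metrics (exact_match for JSQuAD, accuracy for JNLI and MARC-ja) are then read from the JSON results file.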
## Training procedure