Update README.md
README.md CHANGED
@@ -9,9 +9,11 @@ We evaluated our model using the following JGLUE tasks. Here are the scores:
 | Task | Score |
 |----------------|---------:|
 | JSQUAD(exact_match) | 62.83 |
-| JCOMMONSENSEQA(acc) |
-| JNLI(acc) |
+| JCOMMONSENSEQA(acc) | 75.78 |
+| JNLI(acc) | 50.69 |
 | MARC_JA(acc) | - |
+|----------------|---------:|
+| Average | - |
 
 The JGLUE scores were measured using the following script:
 [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)
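
For context, the snippet below is a minimal sketch of how a JGLUE run with that harness might look. It assumes the jp-stable branch keeps the upstream `lm_eval.evaluator.simple_evaluate` entry point; the model identifier and the unversioned task names are placeholders, not the exact settings behind this commit, so check the fork's task registry before running.

```python
# Minimal sketch, NOT the exact command used for this README: it assumes the
# jp-stable fork keeps the upstream lm-evaluation-harness Python API.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                      # HuggingFace causal-LM backend
    model_args="pretrained=YOUR_MODEL_ID",  # placeholder: the model this README describes
    tasks=["jsquad", "jcommonsenseqa",      # assumed task names; the fork may expose
           "jnli", "marc_ja"],              # versioned variants instead
    num_fewshot=2,                          # assumption; use the few-shot setting the fork documents
)
print(results["results"])                   # per-task metrics (exact_match, acc, ...)
```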