Update README.md
README.md CHANGED
@@ -14,22 +14,32 @@ should probably proofread and complete it, then remove this comment. -->

# output

-This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on the [Lila dataset](https://github.com/allenai/Lila).
-It achieves the following results on the evaluation set:
-- Loss: 0.5884
-- Accuracy: 0.8664
-
## Model description

-
+This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on the Lila-IID-train/dev set from the [Lila dataset](https://github.com/allenai/Lila).

## Intended uses & limitations

-
-
-
-
-
+If you use this model, please cite our work.
+```
+@INPROCEEDINGS{Mishra2022Lila,
+  author = {
+    Swaroop Mishra
+    and Matthew Finlayson
+    and Pan Lu
+    and Leonard Tang
+    and Sean Welleck
+    and Chitta Baral
+    and Tanmay Rajpurohit
+    and Oyvind Tafjord
+    and Ashish Sabharwal
+    and Peter Clark
+    and Ashwin Kalyan},
+  title = {Lila: A Unified Benchmark for Mathematical Reasoning},
+  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
+  year = {2022}
+}
+```

## Training procedure

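The updated card describes a causal language model fine-tuned from EleutherAI/gpt-neo-2.7B for math reasoning. As a minimal usage sketch (the repository id and prompt below are placeholders, not taken from this card), such a checkpoint can be loaded and prompted with the `transformers` library:

```python
# Minimal sketch: load a GPT-Neo-2.7B checkpoint fine-tuned on Lila and
# generate an answer for a math word problem.
# NOTE: the repo id is hypothetical; substitute the actual Hub id of this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/gpt-neo-2.7B-lila-iid"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Question: A baker made 48 cookies and sold half of them. "
    "How many cookies are left?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```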