zhibin-msft committed
Commit: 36b29db
Parent(s): 8708928
Update README.md
README.md CHANGED
@@ -15,4 +15,5 @@ The Rho-1 series are pretrained language models that utilize Selective Language
 In math reasoning pretraining, SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.


-For more details please check our [github](https://github.com/microsoft/rho).
+For more details please check our [github](https://github.com/microsoft/rho) and [paper](https://arxiv.org/abs/2404.07965).
+