patrickvonplaten committed
Commit 91c21bb • 1 Parent(s): c1e8dfb
Add contribution note
README.md CHANGED

@@ -22,6 +22,8 @@ Model card from the original [repo](https://github.com/paperswithcode/galai/blob
 
 Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
 
+**This model checkpoint was integrated into the Hub by [Manuel Romero](https://huggingface.co/mrm8488)**
+
 ## Model Details
 
 The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
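
Since the commit concerns a checkpoint integrated into the Hub, a minimal usage sketch with the `transformers` library may help readers of this model card. The repo id `facebook/galactica-125m` and the `[START_REF]` citation-prediction prompt are taken from the official GALACTICA release for illustration; they are assumptions here, not part of this commit, so substitute the id of the checkpoint this commit belongs to.

```python
# Minimal sketch: load a GALACTICA checkpoint from the Hub and run
# citation prediction. "facebook/galactica-125m" is an illustrative id;
# replace it with the checkpoint id this commit refers to.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "facebook/galactica-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# GALACTICA's [START_REF] token prompts the model to generate a citation.
prompt = "The Transformer architecture [START_REF]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

`AutoModelForCausalLM` resolves to the OPT-style decoder architecture that the GALACTICA checkpoints use, so the same snippet should work across the released sizes (125M to 120B), subject to available memory.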