Update README.md
README.md
path: "unfair_tos/test.json"
---

# Adapting Large Language Models via Reading Comprehension

This repo contains the evaluation datasets for our paper [Adapting Large Language Models via Reading Comprehension](https://arxiv.org/pdf/2309.09530.pdf).

We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in the **biomedicine, finance, and law domains**. Our 7B model competes with much larger domain-specific models like BloombergGPT-50B. Moreover, our domain-specific reading comprehension texts enhance model performance even on general benchmarks, indicating their potential for developing a general LLM across more domains.

## GitHub repo:
https://github.com/microsoft/LMOps

## Domain-specific LLMs:
Our models for different domains are now available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM), and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:

<p align='center'>
    <img src="./comparison.png" width="700">
</p>
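
To try one of these checkpoints, here is a minimal sketch using the `transformers` library. The repo id comes from the links above; the example prompt and generation settings are our own illustrative assumptions, not the paper's evaluation setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from the links above; swap in "AdaptLLM/medicine-LLM"
# or "AdaptLLM/finance-LLM" for the other domains.
model_name = "AdaptLLM/law-LLM"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical zero-shot prompt; the actual filled-in task prompts
# are published in the datasets listed under "Domain-specific Tasks".
prompt = "Question: Is a verbal agreement legally binding?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```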

## Domain-specific Tasks:
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions for each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
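
For example, a sketch of loading one task with the `datasets` library, assuming this repo is `AdaptLLM/law-tasks` (the `unfair_tos` config name is taken from the YAML front matter above); we do not assume the record schema, so the example simply inspects one record.

```python
from datasets import load_dataset

# Assumed repo id; the "unfair_tos" config and its test split
# appear in the YAML front matter of this README.
ds = load_dataset("AdaptLLM/law-tasks", "unfair_tos")

# Print one record to see the filled-in instruction and completion fields.
print(ds["test"][0])
```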

## Citation:
```bibtex
@inproceedings{AdaptLLM,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  url={https://arxiv.org/abs/2309.09530},
  year={2023},
}
```