Update README.md
README.md CHANGED
@@ -117,7 +117,7 @@ model-index:
       name: Open LLM Leaderboard
 ---
 
-# Adapting
+# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
 This repo contains the domain-specific chat model developed from **LLaMA-2-Chat-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
 
 We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.

@@ -181,7 +181,7 @@ outputs = model.generate(input_ids=inputs, max_length=4096)[0]
 answer_start = int(inputs.shape[-1])
 pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
 
-print(
+print(pred)
 ```
 ### LLaMA-3-8B (💡New!)
 In our recent research on [Instruction-Pretrain](https://huggingface.co/papers/2406.14491), we developed a context-based instruction synthesizer to augment the raw corpora with instruction-response pairs, **enabling Llama3-8B to be comparable to or even outperform Llama3-70B**: [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B).
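For context when viewing this commit on its own: the second hunk edits the tail of the README's Python usage example, completing the truncated `print(` line as `print(pred)`. The sketch below reconstructs the surrounding flow with the standard `transformers` API so the fragment reads end to end; the checkpoint name, the question text, and the Llama-2-chat prompt format are illustrative assumptions, while the `generate`, `answer_start`, `decode`, and `print(pred)` lines mirror the fragment visible in the hunk.

```python
# Minimal sketch around the edited snippet; names not shown in the hunk are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AdaptLLM/finance-chat"  # placeholder: substitute the chat model this card describes
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Llama-2-chat style prompt (assumed from the LLaMA-2-Chat-7B base named in the card).
user_input = "Use the given context to answer the question: ..."  # illustrative placeholder
prompt = f"[INST] {user_input} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=inputs, max_length=4096)[0]

# Keep only the newly generated tokens, i.e. everything after the prompt.
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)

print(pred)  # the line this commit completes
```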