Update README.md
README.md CHANGED
@@ -123,14 +123,16 @@ This repo contains the domain-specific chat model developed from **LLaMA-2-Chat-
 
 We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
 
-### [2024/
+### [2024/11/29] 🤗 Introduce the multimodal version of AdaptLLM at [AdaMLLM](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains), for adapting MLLMs to domains 🤗
 
 **************************** **Updates** ****************************
+* 2024/11/29: Released [AdaMLLM](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains) for adapting MLLMs to domains
+* 2024/9/20: Our [research paper for Instruction-Pretrain](https://huggingface.co/papers/2406.14491) has been accepted by EMNLP 2024
 * 2024/8/29: Updated [guidelines](https://huggingface.co/datasets/AdaptLLM/finance-tasks) on evaluating any 🤗Huggingface models on the domain-specific tasks
 * 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm)
-* 2024/6/21: Released the
+* 2024/6/21: Released the general version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain)
 * 2024/4/2: Released the [raw data splits (train and test)](https://huggingface.co/datasets/AdaptLLM/ConvFinQA) of all the evaluation datasets
-* 2024/1/16: Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024
+* 2024/1/16: Our [research paper for AdaptLLM](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024
 * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B
 * 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B
 * 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B
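
As a usage note for the chat models announced in the updates above: they load with the standard 🤗 transformers API. Below is a minimal sketch, assuming the `AdaptLLM/law-chat` model ID from the links above and a LLaMA-2-Chat-style `[INST]` prompt wrapper; the wrapper is an assumption here, so check the model card for the exact template.

```python
# Minimal sketch: querying one of the released chat models with 🤗 transformers.
# Assumptions: the AdaptLLM/law-chat model ID (from the links above) and the
# LLaMA-2-Chat-style [INST] wrapper -- verify the exact template on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AdaptLLM/law-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "What is the statute of limitations for a breach-of-contract claim?"
prompt = f"[INST] {question} [/INST]"  # assumed prompt format, LLaMA-2-Chat style

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```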