AdaptLLM committed on
Commit b06f810
1 Parent(s): dfc76e9

Update README.md

Files changed (1): README.md +28 -1

README.md CHANGED
@@ -30,6 +30,7 @@ We explore **continued pre-training on domain-specific corpora** for large langu
30
  ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
31
 
32
  **************************** **Updates** ****************************
 
33
  * 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!! 🎉
34
  * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
35
  * 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
@@ -86,10 +87,36 @@ print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
86
  ```
87
 
88
  ## Domain-Specific Tasks
89
- To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
 
 
90
 
91
  **Note:** these filled-in instructions are specifically tailored for models before alignment and do NOT fit the data format required for chat models.
92
93
  ## Citation
94
  If you find our work helpful, please cite us:
95
  ```bibtex
 
30
  ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
31
 
32
  **************************** **Updates** ****************************
33
+ * 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets
34
  * 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!! 🎉
35
  * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
36
  * 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
 
87
  ```
88
 
89
  ## Domain-Specific Tasks
90
+
91
+ ### Pre-templatized/Formatted Testing Splits
92
+ To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions of the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
93
 
94
  **Note:** these filled-in instructions are specifically tailored for models before alignment and do NOT fit the data format required for chat models.
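To illustrate why the filled-in instructions do not transfer to chat models, here is a minimal sketch of the difference. The LLaMA-2-Chat turn markers below follow the format published by Meta; the sample task input is a made-up placeholder, not taken from our datasets.

```python
# Sketch: the same filled-in instruction formatted for a base model
# versus a chat model. Base (pre-alignment) models consume the raw
# instruction text; chat models expect their own turn markers.

def base_prompt(instruction: str) -> str:
    # Base models take the filled-in instruction as-is.
    return instruction

def llama2_chat_prompt(instruction: str) -> str:
    # LLaMA-2-Chat wraps the user message in [INST] ... [/INST] markers.
    return f"[INST] {instruction} [/INST]"

task_input = "Answer the question: which relation holds between the two entities?"
print(base_prompt(task_input))
print(llama2_chat_prompt(task_input))
```

Other chat models use different templates, so re-wrap the raw task inputs with whatever format your chosen chat model expects.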
95
 
96
+ ### Raw Datasets
97
+ We have also uploaded the raw training and testing splits to facilitate fine-tuning or other uses:
98
+ - [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt)
99
+ - [RCT](https://huggingface.co/datasets/AdaptLLM/RCT)
100
+ - [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA)
101
+ - [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA)
102
+ - [Headline](https://huggingface.co/datasets/AdaptLLM/Headline)
103
+ - [NER](https://huggingface.co/datasets/AdaptLLM/NER)
104
+
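To use these raw splits for fine-tuning, each example typically needs to be rendered into a prompt/completion pair. A minimal sketch follows; the field names `text` and `label` and the sample record are illustrative assumptions, so check each dataset's actual schema before adapting it.

```python
# Sketch: turn a raw classification example into a prompt/completion
# pair for supervised fine-tuning. Field names here are hypothetical;
# inspect the actual columns of each dataset before reusing this.

def to_sft_pair(example: dict, instruction: str) -> dict:
    return {
        "prompt": f"{instruction}\n\n{example['text']}",
        "completion": str(example["label"]),
    }

raw = {"text": "Aspirin inhibits COX-1.", "label": "inhibitor"}
pair = to_sft_pair(raw, "Classify the chemical-protein relation:")
print(pair["prompt"])
print(pair["completion"])
```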
105
+ The other datasets used in our paper are already available on Hugging Face, so you can load them directly with the following code:
106
+ ```python
107
+ from datasets import load_dataset
108
+ # MQP:
109
+ dataset = load_dataset('medical_questions_pairs')
110
+ # PubmedQA:
111
+ dataset = load_dataset('bigbio/pubmed_qa')
112
+ # SCOTUS
113
+ dataset = load_dataset("lex_glue", 'scotus')
114
+ # CaseHOLD
115
+ dataset = load_dataset("lex_glue", 'case_hold')
116
+ # UNFAIR-ToS
117
+ dataset = load_dataset("lex_glue", 'unfair_tos')
118
+ ```
119
+
120
  ## Citation
121
  If you find our work helpful, please cite us:
122
  ```bibtex