Text Generation
Transformers
Safetensors
English
mistral
text-generation-inference
Inference Endpoints
instruction-pretrain committed on
Commit
e3ed28d
1 Parent(s): 3cff8f2

Update README.md

Files changed (1)
  1. README.md +105 -7
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
6
  - instruction-pretrain/ft-instruction-synthesizer-collection
7
  ---
8
  # Instruction Pre-Training: Language Models are Supervised Multitask Learners
9
- This repo contains the **context-based instruction synthesizer** used in our paper **Instruction Pre-Training: Language Models are Supervised Multitask Learners**.
10
 
11
  We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.
12
 
@@ -28,7 +28,7 @@ We explore supervised multitask pre-training by proposing ***Instruction Pre-Tra
28
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
29
 
30
 
31
- ## Synthesize Instruction-Response Pairs based on Any Raw text
32
  We conduct multitask fine-tuning on a language model to develop an instruction synthesizer capable of generating instruction-response pairs from any raw text. The fine-tuning data are available at [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection).
33
 
34
 
@@ -36,7 +36,7 @@ We conduct multitask fine-tuning on a language model to develop an instruction s
36
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/0889QyG59QM3rPeZlcTzZ.png" width="700">
37
  </p>
38
 
39
- For example, to prompt the synthesizer to generate instruction-response pairs based on a given raw text:
40
  ```python
41
  from transformers import AutoModelForCausalLM, AutoTokenizer
42
 
@@ -75,13 +75,13 @@ def get_instruction_response_pairs(context):
75
  '''Prompt the synthesizer to generate instruction-response pairs based on the given context'''
76
  prompt = f'<s> <CON> {context} </CON>\n\n'
77
  inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids.to(model.device)
78
- outputs = model.generate(input_ids=inputs, max_new_tokens=400)[0]
79
 
80
  pred_start = int(inputs.shape[-1])
81
  pred = tokenizer.decode(outputs[pred_start:], skip_special_tokens=True)
82
  return parse_pred(pred)
83
 
84
- # Get the list of generated instruction-response paris
85
  instruction_response_pairs = get_instruction_response_pairs(context)
86
 
87
  # Print out the results
@@ -90,8 +90,106 @@ for index, pair in enumerate(instruction_response_pairs):
90
  print(f'## Instruction {index + 1}:\n{pair["Q"]}\n## Response {index + 1}:\n{pair["A"]}\n')
91
  ```
92
 
93
- ### To-Do
94
- - [ ] Add example usages for synthesizing few-shot examples
95
 
96
  ## Citation
97
  If you find our work helpful, please cite us:
 
6
  - instruction-pretrain/ft-instruction-synthesizer-collection
7
  ---
8
  # Instruction Pre-Training: Language Models are Supervised Multitask Learners
9
+ This repo contains the **context-based instruction synthesizer** used in our paper **Instruction Pre-Training: Language Models are Supervised Multitask Learners**.
10
 
11
  We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.
12
 
 
28
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
29
 
30
 
31
+ ## Synthesize Instruction-Response Pairs to Augment Any Raw Corpus
32
  We conduct multitask fine-tuning on a language model to develop an instruction synthesizer capable of generating instruction-response pairs from any raw text. The fine-tuning data are available at [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection).
33
 
34
 
 
36
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/0889QyG59QM3rPeZlcTzZ.png" width="700">
37
  </p>
38
 
39
+ ### Basic Usage: Synthesize instruction-response pairs based on a given raw text
40
  ```python
41
  from transformers import AutoModelForCausalLM, AutoTokenizer
42
 
 
75
  '''Prompt the synthesizer to generate instruction-response pairs based on the given context'''
76
  prompt = f'<s> <CON> {context} </CON>\n\n'
77
  inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids.to(model.device)
78
+ outputs = model.generate(input_ids=inputs, max_new_tokens=400, do_sample=False)[0]
79
 
80
  pred_start = int(inputs.shape[-1])
81
  pred = tokenizer.decode(outputs[pred_start:], skip_special_tokens=True)
82
  return parse_pred(pred)
83
 
84
+ # Get the generated instruction-response pairs
85
  instruction_response_pairs = get_instruction_response_pairs(context)
86
 
87
  # Print out the results
 
90
  print(f'## Instruction {index + 1}:\n{pair["Q"]}\n## Response {index + 1}:\n{pair["A"]}\n')
91
  ```
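The synthesizer emits its pairs in a tagged `<QUE> ... <ANS> ... </END>` format, which the script above splits apart with a `parse_pred` helper. As a minimal, dependency-free sketch of that tag-splitting logic (the `parse_pairs` function and the sample `pred` string below are hypothetical simplifications, not the repo's exact helper):

```python
def parse_pairs(pred: str):
    """Split a tagged prediction into {'Q': ..., 'A': ...} dicts."""
    chunks = pred.split('</END>')
    if not pred.endswith('</END>'):
        chunks = chunks[:-1]  # drop a trailing, unterminated fragment
    pairs = []
    for chunk in chunks:
        if '<ANS>' not in chunk:
            continue  # skip empty or malformed chunks
        q, a = chunk.split('<ANS>', 1)
        q = q.replace('<QUE>', '').strip()
        a = a.strip()
        if q and a:
            pairs.append({'Q': q, 'A': a})
    return pairs

# Hypothetical model output, for illustration only:
pred = ('<QUE> What is studied? <ANS> Worker susceptibility. </END>\n\n'
        '<QUE> What is the key factor? <ANS> Genetic makeup. </END>')
print(parse_pairs(pred))
```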
92
 
93
+ ### Advanced Usage: Synthesize Few-shot Examples
94
+ A one-shot example consists of a piece of raw text followed by its instruction-response pairs. You can conduct multi-round inference to synthesize a few-shot example, in which the instruction-response pairs of different raw texts share the same pattern.
95
+
96
+ To accelerate synthesis, we use the [vLLM framework](https://github.com/vllm-project/vllm?tab=readme-ov-file):
97
+ <details>
98
+ <summary> Click to expand </summary>
99
+
100
+ 1. Set up dependencies:
101
+ Install vLLM with pip or from [source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source):
102
+
103
+ ```bash
104
+ pip install vllm
105
+ ```
106
+
107
+ 2. Synthesize:
108
+ ```python
109
+ from vllm import LLM, SamplingParams
110
+
111
+ # Put your list of raw texts here,
112
+ # a list of M raw texts can be converted into an M-shot example:
113
+ text_list = [
114
+ "Genetically and medically susceptible workers.\nThe likelihood of an individual becoming ill from a hazardous material or condition is strongly influenced by both their genetic makeup and their underlying state of health. Although the past decade has seen great advances in understanding human variation in health and genetic polymorphisms and in the diagnosis and treatment of disease, much less progress has been made in effectively using this information to protect worker health. Scientific evidence for increased susceptibility often is weak and rarely satisfies legal thresholds for sufficient risk to warrant exclusion from a particular job. When public safety is a major concern, many legally mandated exclusions are not well justified. Medical opinions about fitness to work should be based upon a systematic and credible analysis of the condition, its relationship to ability and risk for a particular job, and knowledge of possible accommodations. Conclusions should reflect the limitations of scientific knowledge and guidance from antidiscrimination legislation.",
115
+ "Exclusive Breastfeeding for Twin Babies and Its Influencing Factors: A Study in East Java, Indonesia.\nThis study aimed to identify the factors that influence the success of exclusive breastfeeding in twins. This cross-sectional study was conducted on 184 mothers who had twins aged 6-23 months in Malang Raya, East Java, Indonesia and used the consecutive sampling technique. The data was collected through distributing questionnaires containing questions related to knowledge about exclusive breastfeeding, breastfeeding self-efficacy, and the support of family and certified health workers. Multinomial regression statistical test results show that the most influential factor for the success of exclusive breastfeeding with twins was breastfeeding self-efficacy (OR 0.111; 95% CI 0.033-0.387). A high level of breastfeeding self-efficacy can increase a mother's confidence to be able to provide exclusive breastfeeding for twins. This study suggests that nurses can provide breastfeeding counselling to improve breastfeeding self-efficacy."]
116
+
117
+ # Create a sampling params object.
118
+ sampling_params = SamplingParams(temperature=0, max_tokens=400)
119
+
120
+ # Load the model and tokenizer
121
+ llm = LLM(model="instruction-pretrain/instruction-synthesizer", max_model_len=4096)
122
+
123
+ # Templates (please do NOT change them)
124
+ context_template = ' <CON> {context} </CON>'
125
+ QA_template = '<QUE> {question} <ANS> {answer} </END>'
126
+ delimiter = '\n\n'
127
+ bos_token = '<s>'
128
+ eos_token = '</s>'
129
+
130
+ def cook_context(raw_context):
131
+ """Format the context."""
132
+ return bos_token + context_template.replace('{context}', raw_context) + delimiter
133
+
134
+ def cook_instruction_response_pairs(QA_list):
135
+ """Format downstream instruction(Q)-response(A) pairs."""
136
+ ins_res_list = []
137
+ for qa_entry in QA_list:
138
+ qa = QA_template.replace('{question}', qa_entry['Q']).replace('{answer}', qa_entry['A'])
139
+ ins_res_list.append(qa)
140
+ return delimiter.join(ins_res_list) + eos_token
141
+
142
+ def parse_pred(pred):
143
+ """Extract the list of instruction-response pairs from the prediction"""
144
+ QA_str_list = pred.split('</END>')
145
+ if not pred.endswith('</END>'):
146
+ QA_str_list = QA_str_list[:-1]
147
+
148
+ QA_list = []
149
+ raw_questions = []
150
+ for QA_str in QA_str_list:
151
+ try:
152
+ assert len(QA_str.split('<ANS>')) == 2, f'invalid QA string: {QA_str}'
153
+ Q_str, A_str = QA_str.split('<ANS>')
154
+ Q_str, A_str = Q_str.strip(), A_str.strip()
155
+ assert Q_str.startswith('<QUE>'), f'invalid question string: {Q_str} in QA_str: {QA_str}'
156
+ assert len(A_str) > 0, f'invalid answer string in QA_str: {QA_str}'
157
+ Q_str = Q_str.replace('<QUE>', '').strip()
158
+ assert Q_str.lower() not in raw_questions, f'duplicate question: {Q_str}'
159
+ QA_list.append({'Q': Q_str, 'A': A_str})
160
+ raw_questions.append(Q_str.lower())
161
+ except AssertionError:
162
+ pass
163
+
164
+ return QA_list
165
+
166
+ def get_instruction_response_pairs(context):
167
+ '''Prompt the synthesizer to generate instruction-response pairs based on the given context'''
168
+ outputs = llm.generate(context, sampling_params, use_tqdm=False)
169
+ pred = outputs[0].outputs[0].text
170
+ return parse_pred(pred)
171
+
172
+ # Process each text and generate instruction-response pairs in multi-round inference:
173
+ previous_examples = []
174
+ for cur_text in text_list:
175
+ # Prepend raw texts and instruction-response pairs of previous examples to the current text
176
+ context = ''
177
+ for previous_example in previous_examples:
178
+ context += cook_context(previous_example['text']) + cook_instruction_response_pairs(previous_example['instruction_response_pairs'])
179
+ context += cook_context(cur_text)
180
+
181
+ # Get the generated instruction-response pairs
182
+ instruction_response_pairs = get_instruction_response_pairs(context)
183
+ previous_examples.append({'text': cur_text, 'instruction_response_pairs': instruction_response_pairs})
184
+
185
+ # Concatenate the raw texts and instruction-response pairs of M rounds to constitute an M-shot example
186
+ for example in previous_examples:
187
+ print(f'# Raw Text:\n{example["text"]}\n')
188
+ for index, pair in enumerate(example['instruction_response_pairs']):
189
+ print(f'## Instruction {index + 1}:\n{pair["Q"]}\n## Response {index + 1}:\n{pair["A"]}\n')
190
+ ```
191
+ </details>
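The round-to-round prompt construction can be made concrete with a small, dependency-free sketch that reuses the same templates as the script above; the raw texts and the Q/A pair here are hypothetical placeholders:

```python
# Templates, identical to those used by the synthesizer script above.
context_template = ' <CON> {context} </CON>'
QA_template = '<QUE> {question} <ANS> {answer} </END>'
delimiter = '\n\n'
bos_token, eos_token = '<s>', '</s>'

def cook_context(raw_context):
    """Wrap a raw text in context tags."""
    return bos_token + context_template.replace('{context}', raw_context) + delimiter

def cook_pairs(QA_list):
    """Format instruction(Q)-response(A) pairs and close with the EOS token."""
    cooked = [QA_template.replace('{question}', qa['Q']).replace('{answer}', qa['A'])
              for qa in QA_list]
    return delimiter.join(cooked) + eos_token

# One completed example (a raw text plus its synthesized pairs) ...
shot = cook_context('First raw text.') + cook_pairs([{'Q': 'A question?', 'A': 'An answer.'}])
# ... is prepended to the next raw text to form the next round's prompt:
prompt = shot + cook_context('Second raw text.')
print(prompt)
```

Each additional round appends another completed example in front of the new text, so an M-th round prompt carries M-1 worked examples that share the same instruction-response pattern.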
192
+
193
 
194
  ## Citation
195
  If you find our work helpful, please cite us: