<img src="https://cdn-uploads.huggingface.co/production/uploads/62aba5ebab9ed4f63c36b1e2/47PZcc9QTR_okQIvKeOLn.png" alt="image/png" style="transform: scale(1);">

## 📖 Introduction

**Qwen2-7B-Instruct-Refine** and **Qwen2-1.5B-Instruct-Refine** are two powerful large language models that act as proficient prompt engineers. They optimize and refine the prompts that users provide, and the refined instructions enable an LLM to produce noticeably better and more informative responses.

We fine-tuned **Qwen2-7B-Instruct** and **Qwen2-1.5B-Instruct** to obtain **Qwen2-7B-Instruct-Refine** and **Qwen2-1.5B-Instruct-Refine**.
We sampled the training data from the OpenHermes and LCCD datasets, ensuring a balanced task distribution. For the training-set annotations, we used Qwen-max with our handwritten examples incorporated as in-context prompts.

## 🚀 Quick Start

The following code snippet shows how to load the tokenizer and the model, and how to use `apply_chat_template` to generate a refined instruction.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the refiner model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "alibaba-pai/Qwen2-1.5B-Instruct-Refine",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/Qwen2-1.5B-Instruct-Refine")

# The raw user prompt that we want the model to refine
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=151645,  # <|im_end|>, the end-of-turn token of the Qwen2 chat template
)
# Keep only the newly generated tokens, dropping the prompt tokens
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

# The refined, more detailed version of the original prompt
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
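
The refined instruction is meant to be handed to a downstream chat model rather than shown to the user. Below is a minimal sketch of that two-stage flow, continuing from the snippet above. The `generate_text` helper is a hypothetical wrapper we introduce here, and using `Qwen/Qwen2-1.5B-Instruct` as the answering model is our assumption; any instruction-tuned chat model can take its place.

```python
# Hypothetical helper wrapping the apply_chat_template + generate + decode
# steps from the snippet above; not part of the released API.
def generate_text(model, tokenizer, prompt):
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(inputs.input_ids, max_new_tokens=2048, eos_token_id=151645)
    return tokenizer.batch_decode(
        [output_ids[0][inputs.input_ids.shape[1]:]], skip_special_tokens=True
    )[0]

# Stage 1: rewrite the raw user prompt with the refiner loaded above.
refined_prompt = generate_text(model, tokenizer, "Give me a short introduction to large language model.")

# Stage 2: answer the refined prompt with an ordinary instruct model
# (Qwen/Qwen2-1.5B-Instruct is an assumed choice, not a requirement).
answer_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct", torch_dtype="auto", device_map="auto"
)
answer_tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
final_response = generate_text(answer_model, answer_tokenizer, refined_prompt)
```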

## 🔍 Evaluation

We fed single-turn instructions from MT-Bench to Qwen2-1.5B-Instruct and Qwen2-7B-Instruct, both directly and after revision by our refiner models. GPT-4-Turbo was then used to judge how the level of detail and the truthfulness of the responses changed; each base model answering the original instructions serves as the 50.00% baseline.

| Model                        | Detail | Truthfulness |
|:----------------------------:|:------:|:------------:|
| Qwen2-1.5B-Instruct          | 50.00% | 50.00%       |
| + Qwen2-1.5B-Instruct-Refine | 75.63% | 63.75%       |
| + Qwen2-7B-Instruct-Refine   | 76.56% | 62.19%       |
| Qwen2-7B-Instruct            | 50.00% | 50.00%       |
| + Qwen2-1.5B-Instruct-Refine | 70.94% | 57.19%       |
| + Qwen2-7B-Instruct-Refine   | 74.69% | 58.44%       |
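
The scores above are pairwise win rates from an LLM judge. For illustration only, here is a minimal sketch of how such a pairwise comparison could be scored. The judge prompt, the `openai` client usage, and the `gpt-4-turbo` model name are our assumptions, not the exact protocol behind the numbers in the table.

```python
# Illustrative pairwise LLM-as-judge scoring; the prompt wording and model
# name are assumptions, not the paper's exact evaluation setup.
from openai import OpenAI

client = OpenAI()

def judge(instruction, answer_a, answer_b, criterion):
    """Ask the judge which answer is better on one criterion: returns 'A', 'B', or 'TIE'."""
    judge_prompt = (
        f"Compare the two answers to the instruction below on {criterion}.\n"
        f"Instruction: {instruction}\n\n[A]: {answer_a}\n\n[B]: {answer_b}\n\n"
        "Reply with exactly one of: A, B, TIE."
    )
    completion = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": judge_prompt}],
    )
    return completion.choices[0].message.content.strip()

# Win rate = (wins + 0.5 * ties) / comparisons; a model compared against
# itself ties, which yields the 50.00% baseline rows.
```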

## 📜 Citation

If you find our work helpful, please cite it!

```bibtex
@misc{TAPIR,
      title={Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning},
      author={Yuanhao Yue and Chengyu Wang and Jun Huang and Peng Wang},
      year={2024},
      eprint={2405.13448},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2405.13448},
}
```