Bohr committed on
Commit
6469832
1 Parent(s): 58abcbf

Create README.md

Files changed (1)
  1. README.md +66 -0
README.md ADDED

## 📖 Introduction

**Qwen2-7B-Instruct-Exp** and **Qwen2-1.5B-Instruct-Exp** are powerful large language models that expand a given instruction into new instructions of the same task type but with different content.

We fine-tuned **Qwen2-7B-Instruct** and **Qwen2-1.5B-Instruct** to obtain **Qwen2-7B-Instruct-Exp** and **Qwen2-1.5B-Instruct-Exp**.
We sampled training data from the OpenHermes and LCCD datasets, ensuring a balanced task distribution. To annotate the training set, we used Qwen-max with our handwritten examples incorporated as in-context prompts; a hypothetical sketch of such a prompt follows the examples below.

#### Example Input
> Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence.
#### Example Output 1
> Describe a classic road trip itinerary along the California coastline in the United States.
#### Example Output 2
> Create a holiday plan that combines cultural experiences in Bangkok, Thailand, with beach relaxation in Phuket.

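To make the annotation step concrete, here is a minimal, hypothetical sketch of how such an in-context prompt for Qwen-max could be assembled. The helper name `build_annotation_prompt`, the instruction wording, and the few-shot pair are illustrative assumptions, not the actual training pipeline.

```python
# Hypothetical sketch of building an in-context annotation prompt for Qwen-max.
# The wording and the few-shot pair below are illustrative assumptions.
FEW_SHOT_PAIRS = [
    (
        "Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence.",
        "Describe a classic road trip itinerary along the California coastline in the United States.",
    ),
]

def build_annotation_prompt(instruction: str) -> str:
    lines = [
        "Write a new instruction of the same task type as the instruction below, "
        "but with different content.",
        "",
    ]
    for source, expanded in FEW_SHOT_PAIRS:
        lines += [f"Instruction: {source}", f"New instruction: {expanded}", ""]
    lines += [f"Instruction: {instruction}", "New instruction:"]
    return "\n".join(lines)
```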

## 🚀 Quick Start

The following code snippet shows how to load the tokenizer and model with `apply_chat_template` and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "alibaba-pai/Qwen2-1.5B-Instruct-Exp",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/Qwen2-1.5B-Instruct-Exp")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=151645,  # <|im_end|> in the Qwen2 vocabulary
)
# Strip the prompt tokens so only the newly generated tokens are decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
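
For the expansion task shown in the introduction, the same pipeline applies with the instruction to expand passed as the user message. A minimal sketch, reusing the `model`, `tokenizer`, and `device` defined above; the helper `expand_instruction` is a name introduced here for illustration:

```python
# Hypothetical helper for the instruction-expansion use case; reuses the
# `model`, `tokenizer`, and `device` loaded in the snippet above.
def expand_instruction(instruction: str) -> str:
    messages = [{"role": "user", "content": instruction}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(device)
    output_ids = model.generate(
        inputs.input_ids, max_new_tokens=2048, eos_token_id=151645
    )
    # Keep only the newly generated tokens before decoding
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(expand_instruction(
    "Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence."
))
```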

## 🔍 Evaluation

We evaluated the data augmentation effect of our models on the Elementary Math and Implicature datasets. Rows prefixed with "+" report the base model above them fine-tuned on data augmented by the named Exp model.

| Model                     | Elementary Math | Implicature |
|---------------------------|-----------------|-------------|
| Qwen2-1.5B-Instruct       | 57.90%          | 28.96%      |
| + Qwen2-1.5B-Instruct-Exp | 59.15%          | 31.22%      |
| + Qwen2-7B-Instruct-Exp   | 58.32%          | 39.37%      |
| Qwen2-7B-Instruct         | 71.40%          | 28.85%      |
| + Qwen2-1.5B-Instruct-Exp | 73.90%          | 35.41%      |
| + Qwen2-7B-Instruct-Exp   | 72.53%          | 32.92%      |
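
As a rough, hypothetical illustration of the augmentation setup these numbers measure (not the released evaluation code), the Exp model can expand each instruction in a fine-tuning set; `expand_instruction` is the illustrative helper sketched in the Quick Start.

```python
# Hypothetical augmentation loop: expand every instruction in the original
# fine-tuning data to grow the training set. How responses for the new
# instructions were produced is not specified here, so only the
# instruction-expansion step is sketched.
train_set = [
    {"instruction": "Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence."},
]

expanded_instructions = [
    expand_instruction(example["instruction"]) for example in train_set
]
```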