David-Xu committed
Commit 89bd182
1 Parent(s): 911c464

Training in progress, step 900
README.md CHANGED
@@ -1,201 +1,80 @@
 ---
-library_name: transformers
-tags: []
 ---

-# Model Card for Model ID

-<!-- Provide a quick summary of what the model is/does. -->

-## Model Details

-### Model Description

-<!-- Provide a longer summary of what this model is. -->

-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]

-### Model Sources [optional]

-<!-- Provide the basic links for the model. -->

-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]

-## Uses

-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

-### Direct Use
-
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
-
-### Downstream Use [optional]
-
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
-[More Information Needed]
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
-## How to Get Started with the Model
-
-Use the code below to get started with the model.
-
-[More Information Needed]
-
-## Training Details
-
-### Training Data
-
-<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
-[More Information Needed]
-
-### Training Procedure
-
-<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
-#### Preprocessing [optional]
-
-[More Information Needed]
-
-
-#### Training Hyperparameters
-
-- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
-#### Speeds, Sizes, Times [optional]
-
-<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
-[More Information Needed]
-
-## Evaluation
-
-<!-- This section describes the evaluation protocols and provides the results. -->
-
-### Testing Data, Factors & Metrics
-
-#### Testing Data
-
-<!-- This should link to a Dataset Card if possible. -->
-
-[More Information Needed]
-
-#### Factors
-
-<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
-[More Information Needed]
-
-#### Metrics
-
-<!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
-[More Information Needed]
-
-### Results
-
-[More Information Needed]
-
-#### Summary
-
-
-
-## Model Examination [optional]
-
-<!-- Relevant interpretability work for the model goes here -->
-
-[More Information Needed]
-
-## Environmental Impact
-
-<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
-Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
-- **Hardware Type:** [More Information Needed]
-- **Hours used:** [More Information Needed]
-- **Cloud Provider:** [More Information Needed]
-- **Compute Region:** [More Information Needed]
-- **Carbon Emitted:** [More Information Needed]
-
-## Technical Specifications [optional]
-
-### Model Architecture and Objective
-
-[More Information Needed]
-
-### Compute Infrastructure
-
-[More Information Needed]
-
-#### Hardware
-
-[More Information Needed]
-
-#### Software
-
-[More Information Needed]
-
-## Citation [optional]
-
-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
-## Model Card Authors [optional]
-
-[More Information Needed]
-
-## Model Card Contact
-
-[More Information Needed]
 ---
+library_name: peft
+tags:
+- alignment-handbook
+- generated_from_trainer
+datasets:
+- David-Xu/astronomy-stack-dpo-20-percent
+base_model: meta-llama/Llama-2-7b-chat-hf
+model-index:
+- name: cira-7b-dpo-lora-merge
+  results: []
 ---

+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->

+# cira-7b-dpo-lora-merge

+This model is a fine-tuned version of [David-Xu/llama-2-7b-cira-sft-v0.1-merge](https://huggingface.co/David-Xu/llama-2-7b-cira-sft-v0.1-merge) on the David-Xu/astronomy-stack-dpo-20-percent dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.6183
+- Rewards/chosen: 0.5535
+- Rewards/rejected: 0.3385
+- Rewards/accuracies: 0.6784
+- Rewards/margins: 0.2150
+- Logps/rejected: -652.2422
+- Logps/chosen: -795.1126
+- Logits/rejected: -1.1812
+- Logits/chosen: -1.0305
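The eval rewards above are internally consistent with the usual DPO bookkeeping: the margin is chosen minus rejected reward, and the per-example DPO loss is -log σ(margin). A small stdlib sanity check (a sketch only; evaluating -log σ at the *mean* margin gives a ballpark for the reported eval loss, not an exact match, since the true loss averages over examples):

```python
import math

# Logged eval metrics from the card above.
rewards_chosen = 0.5535
rewards_rejected = 0.3385
rewards_margins = 0.2150

# Margin is simply chosen minus rejected reward.
margin = rewards_chosen - rewards_rejected
assert abs(margin - rewards_margins) < 1e-9

# Per-example DPO loss is -log(sigmoid(margin)); at the mean margin
# this lands in the neighbourhood of the reported eval loss (0.6183).
loss_at_mean_margin = -math.log(1.0 / (1.0 + math.exp(-margin)))
print(round(loss_at_mean_margin, 4))  # → 0.5914
```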

+## Model description

+More information needed

+## Intended uses & limitations

+More information needed

+## Training and evaluation data

+More information needed

+## Training procedure

+### Training hyperparameters

+The following hyperparameters were used during training:
+- learning_rate: 5e-06
+- train_batch_size: 1
+- eval_batch_size: 1
+- seed: 42
+- distributed_type: multi-GPU
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 4
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: cosine
+- lr_scheduler_warmup_ratio: 0.1
+- num_epochs: 1
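These hyperparameters line up with the step counts logged elsewhere in this commit (train_samples from train_results.json, global_step from trainer_state.json). A quick stdlib check; note that although distributed_type is multi-GPU, the logged total_train_batch_size of 4 is just per-device batch size times gradient accumulation, i.e. a single optimizer process contributed:

```python
# Effective batch size from the hyperparameters above.
train_batch_size = 1            # per device
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 4

# One epoch over 3588 samples (train_results.json) at that batch size
# gives exactly the 897 optimizer steps recorded in trainer_state.json.
train_samples = 3588
steps_per_epoch = train_samples // total_train_batch_size
print(steps_per_epoch)  # → 897
```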

+### Training results

+| Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
+|:-------------:|:-----:|:----:|:-------------:|:---------------:|:------------:|:--------------:|:---------------:|:------------------:|:--------------:|:---------------:|:----------------:|
+| 0.6618 | 0.11 | 100 | -0.8082 | -1.0029 | -823.6102 | -665.3923 | 0.6664 | 0.6432 | 0.2685 | 0.0615 | 0.2070 |
+| 0.6079 | 0.22 | 200 | -1.0530 | -1.2188 | -794.3279 | -642.6389 | 0.6463 | 0.6508 | 0.5613 | 0.1268 | 0.4345 |
+| 0.6029 | 0.33 | 300 | -1.0367 | -1.1965 | -793.2078 | -644.8513 | 0.6360 | 0.6558 | 0.5725 | 0.1601 | 0.4124 |
+| 0.6123 | 0.45 | 400 | -1.1220 | -1.2658 | -787.7750 | -641.9633 | 0.6291 | 0.6608 | 0.6269 | 0.1856 | 0.4413 |
+| 0.5596 | 0.56 | 500 | -1.0852 | -1.2330 | -790.7928 | -646.7930 | 0.6230 | 0.6683 | 0.5967 | 0.2037 | 0.3930 |
+| 0.5382 | 0.67 | 600 | -1.0547 | -1.2034 | -793.2486 | -650.0926 | 0.6199 | 0.6709 | 0.5721 | 0.2121 | 0.3600 |
+| 0.5952 | 0.78 | 700 | -1.0324 | -1.1827 | -794.9604 | -652.0420 | 0.6186 | 0.6784 | 0.5550 | 0.2145 | 0.3405 |
+| 0.5792 | 0.89 | 800 | -1.0308 | -1.1812 | -795.125 | -652.2705 | 0.6182 | 0.6784 | 0.5534 | 0.2151 | 0.3382 |

+### Framework versions

+- PEFT 0.9.0
+- Transformers 4.36.2
+- Pytorch 2.1.0+cu121
+- Datasets 2.14.6
+- Tokenizers 0.15.2
adapter_config.json CHANGED
@@ -19,12 +19,12 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "k_proj",
-    "down_proj",
     "q_proj",
-    "up_proj",
     "gate_proj",
     "o_proj",
+    "k_proj",
+    "down_proj",
+    "up_proj",
     "v_proj"
   ],
   "task_type": "CAUSAL_LM",
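The adapter_config.json change above only reorders the list: the same seven LoRA target modules are present before and after, so the adapter covers the identical set of projection layers. A trivial check:

```python
# Old and new target_modules lists from the diff above.
old = ["k_proj", "down_proj", "q_proj", "up_proj", "gate_proj", "o_proj", "v_proj"]
new = ["q_proj", "gate_proj", "o_proj", "k_proj", "down_proj", "up_proj", "v_proj"]

# Order differs, membership does not.
assert set(old) == set(new)
print(sorted(new))
# → ['down_proj', 'gate_proj', 'k_proj', 'o_proj', 'q_proj', 'up_proj', 'v_proj']
```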
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:02cd843ea653e59041e65b527f9f2a98a74e214e3a7970d99db9c97d659ea17d
+oid sha256:2fadb5d5892b0d10979d86e5f85e2b4b6c90a1ae86d47faa3a10327e89e99222
 size 639692768
all_results.json ADDED
@@ -0,0 +1,21 @@
+{
+  "epoch": 1.0,
+  "eval_logits/chosen": -1.030522108078003,
+  "eval_logits/rejected": -1.1812418699264526,
+  "eval_logps/chosen": -795.1126098632812,
+  "eval_logps/rejected": -652.2422485351562,
+  "eval_loss": 0.6183284521102905,
+  "eval_rewards/accuracies": 0.6783919334411621,
+  "eval_rewards/chosen": 0.5534913539886475,
+  "eval_rewards/margins": 0.214975506067276,
+  "eval_rewards/rejected": 0.33851587772369385,
+  "eval_runtime": 181.6928,
+  "eval_samples": 398,
+  "eval_samples_per_second": 2.191,
+  "eval_steps_per_second": 2.191,
+  "train_loss": 0.06080360662445443,
+  "train_runtime": 395.6281,
+  "train_samples": 3588,
+  "train_samples_per_second": 9.069,
+  "train_steps_per_second": 2.267
+}
config.json ADDED
@@ -0,0 +1,40 @@
+{
+  "_name_or_path": "meta-llama/Llama-2-7b-chat-hf",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "attention_bias": false,
+  "attention_dropout": 0.0,
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 4096,
+  "initializer_range": 0.02,
+  "intermediate_size": 11008,
+  "max_position_embeddings": 4096,
+  "model_type": "llama",
+  "num_attention_heads": 32,
+  "num_hidden_layers": 32,
+  "num_key_value_heads": 32,
+  "pretraining_tp": 1,
+  "quantization_config": {
+    "bnb_4bit_compute_dtype": "bfloat16",
+    "bnb_4bit_quant_type": "nf4",
+    "bnb_4bit_use_double_quant": false,
+    "llm_int8_enable_fp32_cpu_offload": false,
+    "llm_int8_has_fp16_weight": false,
+    "llm_int8_skip_modules": null,
+    "llm_int8_threshold": 6.0,
+    "load_in_4bit": true,
+    "load_in_8bit": false,
+    "quant_method": "bitsandbytes"
+  },
+  "rms_norm_eps": 1e-05,
+  "rope_scaling": null,
+  "rope_theta": 10000.0,
+  "tie_word_embeddings": false,
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.36.2",
+  "use_cache": true,
+  "vocab_size": 32000
+}
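The architecture fields in this config are enough to recompute the base model's parameter count; a back-of-the-envelope stdlib sketch (q/k/v/o are all full-width here since num_key_value_heads equals num_attention_heads, and there are no biases):

```python
# Shape parameters taken from the config.json above.
vocab_size = 32000
hidden = 4096
intermediate = 11008
n_layers = 32

embed = vocab_size * hidden            # token embedding matrix
attn = 4 * hidden * hidden             # q, k, v, o projections per layer
mlp = 3 * hidden * intermediate        # gate, up, down projections per layer
norms = 2 * hidden                     # two RMSNorm weight vectors per layer
lm_head = vocab_size * hidden          # untied output head (tie_word_embeddings: false)

total = embed + n_layers * (attn + mlp + norms) + hidden + lm_head
print(total)  # → 6738415616, the familiar "7B"
```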
eval_results.json ADDED
@@ -0,0 +1,16 @@
+{
+  "epoch": 1.0,
+  "eval_logits/chosen": -1.030522108078003,
+  "eval_logits/rejected": -1.1812418699264526,
+  "eval_logps/chosen": -795.1126098632812,
+  "eval_logps/rejected": -652.2422485351562,
+  "eval_loss": 0.6183284521102905,
+  "eval_rewards/accuracies": 0.6783919334411621,
+  "eval_rewards/chosen": 0.5534913539886475,
+  "eval_rewards/margins": 0.214975506067276,
+  "eval_rewards/rejected": 0.33851587772369385,
+  "eval_runtime": 181.6928,
+  "eval_samples": 398,
+  "eval_samples_per_second": 2.191,
+  "eval_steps_per_second": 2.191
+}
runs/Mar11_07-47-23_b89f062cf3e1/events.out.tfevents.1710143296.b89f062cf3e1.43461.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ccf5c5e3bba46a8c24a4d54113f943a859522edf028ce5efe4857303db15af7c
-size 62112
+oid sha256:afeb78cb3d05bb0b3fd65e0e81252d3094fc6b18bddbd1927e9b0d10ff8ee11b
+size 68172
runs/Mar11_07-47-23_b89f062cf3e1/events.out.tfevents.1710148786.b89f062cf3e1.43461.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ca4e9756fa96092ca18f9c9a968474e22e6168bb2f1c4065950e2262f724bb7
+size 828
runs/Mar11_09-32-25_b89f062cf3e1/events.out.tfevents.1710149602.b89f062cf3e1.120606.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1751710372b7b45d610cbbcdfb4a04053d9047bcd4dd2d7644d566de1b441ac
+size 11054
runs/Mar11_09-32-25_b89f062cf3e1/events.out.tfevents.1710150179.b89f062cf3e1.120606.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc385d51e5ca55c147ac342e70e1ab64d3bdf0e6b5cd96d975e3f8bbf0bc1c12
+size 828
runs/Mar11_09-57-38_b89f062cf3e1/events.out.tfevents.1710151117.b89f062cf3e1.133799.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1670720e83ef6f3f3079392e34c8861a7b7936823e62a6435c847cb6c39d514f
+size 12074
train_results.json ADDED
@@ -0,0 +1,8 @@
+{
+  "epoch": 1.0,
+  "train_loss": 0.06080360662445443,
+  "train_runtime": 395.6281,
+  "train_samples": 3588,
+  "train_samples_per_second": 9.069,
+  "train_steps_per_second": 2.267
+}
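The throughput figures in train_results.json follow directly from the runtime, sample count, and step count (897 steps, per trainer_state.json below). A quick stdlib reproduction:

```python
# Values from train_results.json above and trainer_state.json below.
train_runtime = 395.6281   # seconds
train_samples = 3588
global_step = 897

samples_per_second = train_samples / train_runtime
steps_per_second = global_step / train_runtime
print(round(samples_per_second, 3), round(steps_per_second, 3))  # → 9.069 2.267
```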
trainer_state.json ADDED
@@ -0,0 +1,1418 @@
+{
+  "best_metric": null,
+  "best_model_checkpoint": null,
+  "epoch": 1.0,
+  "eval_steps": 100,
+  "global_step": 897,
+  "is_hyper_param_search": false,
+  "is_local_process_zero": true,
+  "is_world_process_zero": true,
+  "log_history": [
+    {
+      "epoch": 0.0,
+      "learning_rate": 5.555555555555556e-08,
+      "logits/chosen": -0.5698858499526978,
+      "logits/rejected": -0.7608428597450256,
+      "logps/chosen": -628.0617065429688,
+      "logps/rejected": -619.150390625,
+      "loss": 0.6931,
+      "rewards/accuracies": 0.0,
+      "rewards/chosen": 0.0,
+      "rewards/margins": 0.0,
+      "rewards/rejected": 0.0,
+      "step": 1
+    },
+    {
+      "epoch": 0.01,
+      "learning_rate": 5.555555555555555e-07,
+      "logits/chosen": -0.6823484897613525,
+      "logits/rejected": -0.8546125888824463,
+      "logps/chosen": -777.4924926757812,
+      "logps/rejected": -634.3888549804688,
+      "loss": 0.6933,
+      "rewards/accuracies": 0.3611111044883728,
+      "rewards/chosen": -0.0005514526856131852,
+      "rewards/margins": -0.00033656222512945533,
+      "rewards/rejected": -0.00021489031496457756,
+      "step": 10
+    },
+    {
+      "epoch": 0.02,
+      "learning_rate": 1.111111111111111e-06,
+      "logits/chosen": -0.7127803564071655,
+      "logits/rejected": -0.8547202348709106,
+      "logps/chosen": -899.9298095703125,
+      "logps/rejected": -712.7205810546875,
+      "loss": 0.6912,
+      "rewards/accuracies": 0.550000011920929,
+      "rewards/chosen": 0.005615489557385445,
+      "rewards/margins": 0.003956252709031105,
+      "rewards/rejected": 0.0016592368483543396,
+      "step": 20
+    },
+    {
+      "epoch": 0.03,
+      "learning_rate": 1.6666666666666667e-06,
+      "logits/chosen": -0.702410101890564,
+      "logits/rejected": -0.9075536727905273,
+      "logps/chosen": -799.2589111328125,
+      "logps/rejected": -717.34716796875,
+      "loss": 0.6903,
+      "rewards/accuracies": 0.6000000238418579,
+      "rewards/chosen": 0.01230540033429861,
+      "rewards/margins": 0.005792214069515467,
+      "rewards/rejected": 0.006513187196105719,
+      "step": 30
+    },
+    {
+      "epoch": 0.04,
+      "learning_rate": 2.222222222222222e-06,
+      "logits/chosen": -0.7552632093429565,
+      "logits/rejected": -0.9524158239364624,
+      "logps/chosen": -835.037109375,
+      "logps/rejected": -631.7636108398438,
+      "loss": 0.6904,
+      "rewards/accuracies": 0.574999988079071,
+      "rewards/chosen": 0.025444606319069862,
+      "rewards/margins": 0.005691047292202711,
+      "rewards/rejected": 0.019753558561205864,
+      "step": 40
+    },
+    {
+      "epoch": 0.06,
+      "learning_rate": 2.7777777777777783e-06,
+      "logits/chosen": -0.7381051778793335,
+      "logits/rejected": -0.8419072031974792,
+      "logps/chosen": -788.6356811523438,
+      "logps/rejected": -730.46728515625,
+      "loss": 0.6901,
+      "rewards/accuracies": 0.6000000238418579,
+      "rewards/chosen": 0.04884253069758415,
+      "rewards/margins": 0.006506313569843769,
+      "rewards/rejected": 0.04233621805906296,
+      "step": 50
+    },
+    {
+      "epoch": 0.07,
+      "learning_rate": 3.3333333333333333e-06,
+      "logits/chosen": -0.6829635500907898,
+      "logits/rejected": -0.8546798825263977,
+      "logps/chosen": -803.8533325195312,
+      "logps/rejected": -685.8370361328125,
+      "loss": 0.684,
+      "rewards/accuracies": 0.6499999761581421,
+      "rewards/chosen": 0.07093875110149384,
+      "rewards/margins": 0.01889834739267826,
+      "rewards/rejected": 0.052040405571460724,
+      "step": 60
+    },
+    {
+      "epoch": 0.08,
+      "learning_rate": 3.88888888888889e-06,
+      "logits/chosen": -0.6931325197219849,
+      "logits/rejected": -0.8457029461860657,
+      "logps/chosen": -835.8101806640625,
+      "logps/rejected": -559.0534057617188,
+      "loss": 0.674,
+      "rewards/accuracies": 0.6499999761581421,
+      "rewards/chosen": 0.11979933828115463,
+      "rewards/margins": 0.04078071564435959,
+      "rewards/rejected": 0.07901863008737564,
+      "step": 70
+    },
+    {
+      "epoch": 0.09,
+      "learning_rate": 4.444444444444444e-06,
+      "logits/chosen": -0.7829106450080872,
+      "logits/rejected": -1.0052988529205322,
+      "logps/chosen": -857.0613403320312,
+      "logps/rejected": -581.3321533203125,
+      "loss": 0.6535,
+      "rewards/accuracies": 0.625,
+      "rewards/chosen": 0.18766728043556213,
+      "rewards/margins": 0.08457548916339874,
+      "rewards/rejected": 0.10309179872274399,
+      "step": 80
+    },
+    {
+      "epoch": 0.1,
+      "learning_rate": 5e-06,
+      "logits/chosen": -0.8815785646438599,
+      "logits/rejected": -0.96449214220047,
+      "logps/chosen": -789.5969848632812,
+      "logps/rejected": -702.9019165039062,
+      "loss": 0.6794,
+      "rewards/accuracies": 0.5249999761581421,
+      "rewards/chosen": 0.20566365122795105,
+      "rewards/margins": 0.033137936145067215,
+      "rewards/rejected": 0.17252573370933533,
+      "step": 90
+    },
+    {
+      "epoch": 0.11,
+      "learning_rate": 4.9981058784687895e-06,
+      "logits/chosen": -0.8663153648376465,
+      "logits/rejected": -1.0621612071990967,
+      "logps/chosen": -868.1552734375,
+      "logps/rejected": -664.0401611328125,
+      "loss": 0.6618,
+      "rewards/accuracies": 0.7749999761581421,
+      "rewards/chosen": 0.2395782470703125,
+      "rewards/margins": 0.06794992089271545,
+      "rewards/rejected": 0.17162834107875824,
+      "step": 100
+    },
+    {
+      "epoch": 0.11,
+      "eval_logits/chosen": -0.8082190752029419,
+      "eval_logits/rejected": -1.0028988122940063,
+      "eval_logps/chosen": -823.6102294921875,
+      "eval_logps/rejected": -665.3922729492188,
+      "eval_loss": 0.6664446592330933,
+      "eval_rewards/accuracies": 0.643216073513031,
+      "eval_rewards/chosen": 0.268514484167099,
+      "eval_rewards/margins": 0.061498455703258514,
+      "eval_rewards/rejected": 0.20701603591442108,
+      "eval_runtime": 181.7686,
+      "eval_samples_per_second": 2.19,
+      "eval_steps_per_second": 2.19,
+      "step": 100
+    },
+    {
+      "epoch": 0.12,
+      "learning_rate": 4.992426384032258e-06,
+      "logits/chosen": -0.8113915324211121,
+      "logits/rejected": -1.1199345588684082,
+      "logps/chosen": -766.8582763671875,
+      "logps/rejected": -536.8991088867188,
+      "loss": 0.6432,
+      "rewards/accuracies": 0.824999988079071,
+      "rewards/chosen": 0.2860961854457855,
+      "rewards/margins": 0.11079935729503632,
+      "rewards/rejected": 0.1752968281507492,
+      "step": 110
+    },
+    {
+      "epoch": 0.13,
+      "learning_rate": 4.982970122812566e-06,
+      "logits/chosen": -0.8754690289497375,
+      "logits/rejected": -0.9867092370986938,
+      "logps/chosen": -783.0337524414062,
+      "logps/rejected": -615.1319580078125,
+      "loss": 0.6622,
+      "rewards/accuracies": 0.6499999761581421,
+      "rewards/chosen": 0.3586369454860687,
+      "rewards/margins": 0.08097393810749054,
+      "rewards/rejected": 0.277662992477417,
+      "step": 120
+    },
+    {
+      "epoch": 0.14,
+      "learning_rate": 4.969751423856095e-06,
+      "logits/chosen": -0.8801660537719727,
+      "logits/rejected": -1.0285800695419312,
+      "logps/chosen": -857.1593017578125,
+      "logps/rejected": -575.841796875,
+      "loss": 0.6472,
+      "rewards/accuracies": 0.625,
+      "rewards/chosen": 0.41047531366348267,
+      "rewards/margins": 0.11212178319692612,
+      "rewards/rejected": 0.29835352301597595,
+      "step": 130
+    },
+    {
+      "epoch": 0.16,
+      "learning_rate": 4.952790317420694e-06,
+      "logits/chosen": -0.9968269467353821,
+      "logits/rejected": -1.1242711544036865,
+      "logps/chosen": -922.9993286132812,
+      "logps/rejected": -759.6329345703125,
+      "loss": 0.6605,
+      "rewards/accuracies": 0.699999988079071,
+      "rewards/chosen": 0.4612547755241394,
+      "rewards/margins": 0.084444060921669,
+      "rewards/rejected": 0.3768107295036316,
+      "step": 140
+    },
+    {
+      "epoch": 0.17,
+      "learning_rate": 4.932112504623876e-06,
+      "logits/chosen": -0.8811306953430176,
+      "logits/rejected": -1.0080692768096924,
+      "logps/chosen": -805.9840698242188,
+      "logps/rejected": -609.9173583984375,
+      "loss": 0.6289,
+      "rewards/accuracies": 0.7250000238418579,
+      "rewards/chosen": 0.5022997260093689,
+      "rewards/margins": 0.16012665629386902,
+      "rewards/rejected": 0.34217312932014465,
+      "step": 150
+    },
+    {
+      "epoch": 0.18,
+      "learning_rate": 4.907749318497991e-06,
+      "logits/chosen": -0.9709693193435669,
+      "logits/rejected": -1.1685694456100464,
+      "logps/chosen": -752.7962646484375,
+      "logps/rejected": -603.7000122070312,
+      "loss": 0.6427,
+      "rewards/accuracies": 0.675000011920929,
+      "rewards/chosen": 0.4435661733150482,
+      "rewards/margins": 0.13528670370578766,
+      "rewards/rejected": 0.30827948451042175,
+      "step": 160
+    },
+    {
+      "epoch": 0.19,
+      "learning_rate": 4.879737676511367e-06,
+      "logits/chosen": -0.9455941319465637,
+      "logits/rejected": -1.234893560409546,
+      "logps/chosen": -779.9085083007812,
+      "logps/rejected": -608.73291015625,
+      "loss": 0.6212,
+      "rewards/accuracies": 0.7250000238418579,
+      "rewards/chosen": 0.5160066485404968,
+      "rewards/margins": 0.17969150841236115,
+      "rewards/rejected": 0.3363151252269745,
+      "step": 170
+    },
+    {
+      "epoch": 0.2,
+      "learning_rate": 4.848120024627372e-06,
+      "logits/chosen": -1.0280869007110596,
+      "logits/rejected": -1.0985267162322998,
+      "logps/chosen": -857.74609375,
+      "logps/rejected": -563.0875244140625,
+      "loss": 0.6145,
+      "rewards/accuracies": 0.6000000238418579,
+      "rewards/chosen": 0.5581072568893433,
+      "rewards/margins": 0.19067201018333435,
+      "rewards/rejected": 0.36743518710136414,
+      "step": 180
+    },
+    {
+      "epoch": 0.21,
+      "learning_rate": 4.812944272986166e-06,
+      "logits/chosen": -1.0628365278244019,
+      "logits/rejected": -1.268007516860962,
+      "logps/chosen": -884.31884765625,
+      "logps/rejected": -716.6842651367188,
+      "loss": 0.6429,
+      "rewards/accuracies": 0.699999988079071,
+      "rewards/chosen": 0.5645692944526672,
+      "rewards/margins": 0.12494896352291107,
+      "rewards/rejected": 0.43962034583091736,
+      "step": 190
+    },
+    {
+      "epoch": 0.22,
+      "learning_rate": 4.774263723306599e-06,
+      "logits/chosen": -1.1383074522018433,
+      "logits/rejected": -1.194709062576294,
+      "logps/chosen": -809.2498168945312,
+      "logps/rejected": -548.3192138671875,
+      "loss": 0.6079,
+      "rewards/accuracies": 0.699999988079071,
+      "rewards/chosen": 0.5740378499031067,
+      "rewards/margins": 0.20076331496238708,
+      "rewards/rejected": 0.3732745051383972,
+      "step": 200
+    },
+    {
+      "epoch": 0.22,
+      "eval_logits/chosen": -1.0529733896255493,
+      "eval_logits/rejected": -1.2187565565109253,
+      "eval_logps/chosen": -794.3279418945312,
+      "eval_logps/rejected": -642.638916015625,
+      "eval_loss": 0.6462566256523132,
+      "eval_rewards/accuracies": 0.6507537961006165,
+      "eval_rewards/chosen": 0.5613374710083008,
+      "eval_rewards/margins": 0.12678806483745575,
+      "eval_rewards/rejected": 0.4345494508743286,
+      "eval_runtime": 182.8948,
+      "eval_samples_per_second": 2.176,
+      "eval_steps_per_second": 2.176,
+      "step": 200
+    },
+    {
+      "epoch": 0.23,
+      "learning_rate": 4.732136988118259e-06,
+      "logits/chosen": -1.0591025352478027,
+      "logits/rejected": -1.1989389657974243,
+      "logps/chosen": -871.8209838867188,
+      "logps/rejected": -564.6995849609375,
+      "loss": 0.617,
+      "rewards/accuracies": 0.699999988079071,
+      "rewards/chosen": 0.5726070404052734,
+      "rewards/margins": 0.1896977722644806,
+      "rewards/rejected": 0.38290935754776,
+      "step": 210
+    },
+    {
+      "epoch": 0.25,
+      "learning_rate": 4.6866279019460744e-06,
+      "logits/chosen": -1.0191378593444824,
+      "logits/rejected": -1.0795801877975464,
+      "logps/chosen": -749.4663696289062,
+      "logps/rejected": -598.59619140625,
+      "loss": 0.6369,
+      "rewards/accuracies": 0.6499999761581421,
+      "rewards/chosen": 0.5931397676467896,
+      "rewards/margins": 0.16538675129413605,
+      "rewards/rejected": 0.4277530312538147,
+      "step": 220
+    },
+    {
+      "epoch": 0.26,
+      "learning_rate": 4.637805424582033e-06,
+      "logits/chosen": -1.0304447412490845,
+      "logits/rejected": -1.1981227397918701,
+      "logps/chosen": -828.51171875,
+      "logps/rejected": -623.6228637695312,
+      "loss": 0.6275,
+      "rewards/accuracies": 0.75,
+      "rewards/chosen": 0.6149128079414368,
+      "rewards/margins": 0.18098048865795135,
+      "rewards/rejected": 0.4339323043823242,
+      "step": 230
+    },
+    {
+      "epoch": 0.27,
+      "learning_rate": 4.585743536590599e-06,
+      "logits/chosen": -1.041161298751831,
+      "logits/rejected": -1.301321268081665,
+      "logps/chosen": -936.7018432617188,
+      "logps/rejected": -561.6809692382812,
+      "loss": 0.5616,
+      "rewards/accuracies": 0.8500000238418579,
+      "rewards/chosen": 0.6473914384841919,
+      "rewards/margins": 0.3042447566986084,
+      "rewards/rejected": 0.3431466221809387,
+      "step": 240
+    },
+    {
+      "epoch": 0.28,
+      "learning_rate": 4.530521127206173e-06,
+      "logits/chosen": -0.9962241053581238,
+      "logits/rejected": -1.1600590944290161,
+      "logps/chosen": -969.3317260742188,
+      "logps/rejected": -634.9935302734375,
+      "loss": 0.5807,
+      "rewards/accuracies": 0.800000011920929,
+      "rewards/chosen": 0.6946474313735962,
+      "rewards/margins": 0.2879478335380554,
+      "rewards/rejected": 0.4066995680332184,
+      "step": 250
+    },
+    {
+      "epoch": 0.29,
+      "learning_rate": 4.472221874792454e-06,
+      "logits/chosen": -1.0495914220809937,
+      "logits/rejected": -1.218639612197876,
+      "logps/chosen": -753.663818359375,
+      "logps/rejected": -488.46539306640625,
+      "loss": 0.5902,
+      "rewards/accuracies": 0.824999988079071,
+      "rewards/chosen": 0.5778111219406128,
+      "rewards/margins": 0.25250428915023804,
+      "rewards/rejected": 0.32530683279037476,
+      "step": 260
+    },
+    {
+      "epoch": 0.3,
+      "learning_rate": 4.410934120044838e-06,
+      "logits/chosen": -1.087903618812561,
+      "logits/rejected": -1.208064317703247,
+      "logps/chosen": -869.251953125,
+      "logps/rejected": -616.2910766601562,
+      "loss": 0.617,
+      "rewards/accuracies": 0.675000011920929,
+      "rewards/chosen": 0.6315535306930542,
+      "rewards/margins": 0.21421785652637482,
+      "rewards/rejected": 0.4173356592655182,
+      "step": 270
+    },
+    {
+      "epoch": 0.31,
+      "learning_rate": 4.346750732128023e-06,
+      "logits/chosen": -0.9508038759231567,
+      "logits/rejected": -1.128430724143982,
+      "logps/chosen": -715.2760009765625,
+      "logps/rejected": -632.925048828125,
+      "loss": 0.6425,
+      "rewards/accuracies": 0.6499999761581421,
+      "rewards/chosen": 0.6043787002563477,
+      "rewards/margins": 0.15197435021400452,
+      "rewards/rejected": 0.45240435004234314,
+      "step": 280
+    },
+    {
+      "epoch": 0.32,
+      "learning_rate": 4.279768967951605e-06,
+      "logits/chosen": -1.0877809524536133,
+      "logits/rejected": -1.196732997894287,
+      "logps/chosen": -732.0111694335938,
+      "logps/rejected": -645.47998046875,
+      "loss": 0.5899,
+      "rewards/accuracies": 0.7749999761581421,
+      "rewards/chosen": 0.5990191698074341,
+      "rewards/margins": 0.24647493660449982,
+      "rewards/rejected": 0.35254424810409546,
+      "step": 290
+    },
463
+ {
464
+ "epoch": 0.33,
465
+ "learning_rate": 4.210090324796965e-06,
466
+ "logits/chosen": -1.1224285364151,
467
+ "logits/rejected": -1.2125352621078491,
468
+ "logps/chosen": -701.78173828125,
469
+ "logps/rejected": -531.5074462890625,
470
+ "loss": 0.6029,
471
+ "rewards/accuracies": 0.7250000238418579,
472
+ "rewards/chosen": 0.5454891324043274,
473
+ "rewards/margins": 0.2352382391691208,
474
+ "rewards/rejected": 0.3102509081363678,
475
+ "step": 300
476
+ },
477
+ {
478
+ "epoch": 0.33,
479
+ "eval_logits/chosen": -1.0366891622543335,
480
+ "eval_logits/rejected": -1.1964892148971558,
481
+ "eval_logps/chosen": -793.2078247070312,
482
+ "eval_logps/rejected": -644.8512573242188,
483
+ "eval_loss": 0.6359822750091553,
484
+ "eval_rewards/accuracies": 0.6557788848876953,
485
+ "eval_rewards/chosen": 0.5725387930870056,
486
+ "eval_rewards/margins": 0.16011320054531097,
487
+ "eval_rewards/rejected": 0.4124256670475006,
488
+ "eval_runtime": 182.1079,
489
+ "eval_samples_per_second": 2.186,
490
+ "eval_steps_per_second": 2.186,
491
+ "step": 300
492
+ },
493
+ {
494
+ "epoch": 0.35,
495
+ "learning_rate": 4.137820386518716e-06,
496
+ "logits/chosen": -1.007893681526184,
497
+ "logits/rejected": -1.2096047401428223,
498
+ "logps/chosen": -871.7268676757812,
499
+ "logps/rejected": -572.1101684570312,
500
+ "loss": 0.5544,
501
+ "rewards/accuracies": 0.824999988079071,
502
+ "rewards/chosen": 0.6873822808265686,
503
+ "rewards/margins": 0.35040804743766785,
504
+ "rewards/rejected": 0.33697420358657837,
505
+ "step": 310
506
+ },
507
+ {
508
+ "epoch": 0.36,
509
+ "learning_rate": 4.063068663553778e-06,
510
+ "logits/chosen": -1.0717604160308838,
511
+ "logits/rejected": -1.252614974975586,
512
+ "logps/chosen": -747.0122680664062,
513
+ "logps/rejected": -644.8168334960938,
514
+ "loss": 0.5943,
515
+ "rewards/accuracies": 0.7749999761581421,
516
+ "rewards/chosen": 0.6160334944725037,
517
+ "rewards/margins": 0.25227323174476624,
518
+ "rewards/rejected": 0.36376023292541504,
519
+ "step": 320
520
+ },
521
+ {
522
+ "epoch": 0.37,
523
+ "learning_rate": 3.9859484269805215e-06,
524
+ "logits/chosen": -1.1249014139175415,
525
+ "logits/rejected": -1.2282134294509888,
526
+ "logps/chosen": -852.0465087890625,
527
+ "logps/rejected": -594.2730712890625,
528
+ "loss": 0.6014,
529
+ "rewards/accuracies": 0.7250000238418579,
530
+ "rewards/chosen": 0.6664583683013916,
531
+ "rewards/margins": 0.2528621554374695,
532
+ "rewards/rejected": 0.4135962128639221,
533
+ "step": 330
534
+ },
535
+ {
536
+ "epoch": 0.38,
537
+ "learning_rate": 3.906576536879416e-06,
538
+ "logits/chosen": -1.0270237922668457,
539
+ "logits/rejected": -1.242011308670044,
540
+ "logps/chosen": -827.2908325195312,
541
+ "logps/rejected": -647.3381958007812,
542
+ "loss": 0.5662,
543
+ "rewards/accuracies": 0.7749999761581421,
544
+ "rewards/chosen": 0.6910017728805542,
545
+ "rewards/margins": 0.3151417672634125,
546
+ "rewards/rejected": 0.3758600056171417,
547
+ "step": 340
548
+ },
549
+ {
550
+ "epoch": 0.39,
551
+ "learning_rate": 3.825073265255271e-06,
552
+ "logits/chosen": -1.1385161876678467,
553
+ "logits/rejected": -1.1466325521469116,
554
+ "logps/chosen": -724.5143432617188,
555
+ "logps/rejected": -692.8756103515625,
556
+ "loss": 0.6208,
557
+ "rewards/accuracies": 0.699999988079071,
558
+ "rewards/chosen": 0.6377968192100525,
559
+ "rewards/margins": 0.20125746726989746,
560
+ "rewards/rejected": 0.43653935194015503,
561
+ "step": 350
562
+ },
563
+ {
564
+ "epoch": 0.4,
565
+ "learning_rate": 3.7415621137894055e-06,
566
+ "logits/chosen": -1.1019694805145264,
567
+ "logits/rejected": -1.3558331727981567,
568
+ "logps/chosen": -752.0270385742188,
569
+ "logps/rejected": -574.9373779296875,
570
+ "loss": 0.5653,
571
+ "rewards/accuracies": 0.675000011920929,
572
+ "rewards/chosen": 0.6500095725059509,
573
+ "rewards/margins": 0.3266737759113312,
574
+ "rewards/rejected": 0.3233358561992645,
575
+ "step": 360
576
+ },
577
+ {
578
+ "epoch": 0.41,
579
+ "learning_rate": 3.656169626697889e-06,
580
+ "logits/chosen": -1.1742336750030518,
581
+ "logits/rejected": -1.2145332098007202,
582
+ "logps/chosen": -733.7666015625,
583
+ "logps/rejected": -577.3091430664062,
584
+ "loss": 0.6117,
585
+ "rewards/accuracies": 0.699999988079071,
586
+ "rewards/chosen": 0.6057840585708618,
587
+ "rewards/margins": 0.2176964282989502,
588
+ "rewards/rejected": 0.3880876302719116,
589
+ "step": 370
590
+ },
591
+ {
592
+ "epoch": 0.42,
593
+ "learning_rate": 3.5690251989794443e-06,
594
+ "logits/chosen": -1.1451467275619507,
595
+ "logits/rejected": -1.4090936183929443,
596
+ "logps/chosen": -791.69287109375,
597
+ "logps/rejected": -664.351318359375,
598
+ "loss": 0.5472,
599
+ "rewards/accuracies": 0.8500000238418579,
600
+ "rewards/chosen": 0.7302919626235962,
601
+ "rewards/margins": 0.39029377698898315,
602
+ "rewards/rejected": 0.33999815583229065,
603
+ "step": 380
604
+ },
605
+ {
606
+ "epoch": 0.43,
607
+ "learning_rate": 3.480260880343565e-06,
608
+ "logits/chosen": -1.1370658874511719,
609
+ "logits/rejected": -1.268176555633545,
610
+ "logps/chosen": -704.5770263671875,
611
+ "logps/rejected": -549.7819213867188,
612
+ "loss": 0.5728,
613
+ "rewards/accuracies": 0.699999988079071,
614
+ "rewards/chosen": 0.665199875831604,
615
+ "rewards/margins": 0.32814812660217285,
616
+ "rewards/rejected": 0.3370516896247864,
617
+ "step": 390
618
+ },
619
+ {
620
+ "epoch": 0.45,
621
+ "learning_rate": 3.390011175115956e-06,
622
+ "logits/chosen": -1.0938374996185303,
623
+ "logits/rejected": -1.208974003791809,
624
+ "logps/chosen": -662.2012939453125,
625
+ "logps/rejected": -570.6355590820312,
626
+ "loss": 0.6123,
627
+ "rewards/accuracies": 0.699999988079071,
628
+ "rewards/chosen": 0.6198933124542236,
629
+ "rewards/margins": 0.2259746491909027,
630
+ "rewards/rejected": 0.39391860365867615,
631
+ "step": 400
632
+ },
633
+ {
634
+ "epoch": 0.45,
635
+ "eval_logits/chosen": -1.1220430135726929,
636
+ "eval_logits/rejected": -1.265822172164917,
637
+ "eval_logps/chosen": -787.7749633789062,
638
+ "eval_logps/rejected": -641.9633178710938,
639
+ "eval_loss": 0.6290538907051086,
640
+ "eval_rewards/accuracies": 0.660804033279419,
641
+ "eval_rewards/chosen": 0.6268669962882996,
642
+ "eval_rewards/margins": 0.18556177616119385,
643
+ "eval_rewards/rejected": 0.4413052201271057,
644
+ "eval_runtime": 182.0967,
645
+ "eval_samples_per_second": 2.186,
646
+ "eval_steps_per_second": 2.186,
647
+ "step": 400
648
+ },
649
+ {
650
+ "epoch": 0.46,
651
+ "learning_rate": 3.298412838424503e-06,
652
+ "logits/chosen": -1.1896421909332275,
653
+ "logits/rejected": -1.2548556327819824,
654
+ "logps/chosen": -818.2477416992188,
655
+ "logps/rejected": -607.974365234375,
656
+ "loss": 0.5652,
657
+ "rewards/accuracies": 0.75,
658
+ "rewards/chosen": 0.6872069239616394,
659
+ "rewards/margins": 0.32436543703079224,
660
+ "rewards/rejected": 0.36284154653549194,
661
+ "step": 410
662
+ },
663
+ {
664
+ "epoch": 0.47,
665
+ "learning_rate": 3.205604668974607e-06,
666
+ "logits/chosen": -1.0528907775878906,
667
+ "logits/rejected": -1.1407417058944702,
668
+ "logps/chosen": -798.6719970703125,
669
+ "logps/rejected": -551.5494995117188,
670
+ "loss": 0.5562,
671
+ "rewards/accuracies": 0.7749999761581421,
672
+ "rewards/chosen": 0.7110682725906372,
673
+ "rewards/margins": 0.3393869996070862,
674
+ "rewards/rejected": 0.37168124318122864,
675
+ "step": 420
676
+ },
677
+ {
678
+ "epoch": 0.48,
679
+ "learning_rate": 3.111727298727888e-06,
680
+ "logits/chosen": -1.0838218927383423,
681
+ "logits/rejected": -1.3229072093963623,
682
+ "logps/chosen": -683.1719970703125,
683
+ "logps/rejected": -585.8663330078125,
684
+ "loss": 0.6278,
685
+ "rewards/accuracies": 0.699999988079071,
686
+ "rewards/chosen": 0.5717045068740845,
687
+ "rewards/margins": 0.19018582999706268,
688
+ "rewards/rejected": 0.3815186619758606,
689
+ "step": 430
690
+ },
691
+ {
692
+ "epoch": 0.49,
693
+ "learning_rate": 3.0169229798029698e-06,
694
+ "logits/chosen": -1.1374232769012451,
695
+ "logits/rejected": -1.2010905742645264,
696
+ "logps/chosen": -860.7879638671875,
697
+ "logps/rejected": -513.562744140625,
698
+ "loss": 0.5396,
699
+ "rewards/accuracies": 0.75,
700
+ "rewards/chosen": 0.7182751893997192,
701
+ "rewards/margins": 0.4120573103427887,
702
+ "rewards/rejected": 0.30621784925460815,
703
+ "step": 440
704
+ },
705
+ {
706
+ "epoch": 0.5,
707
+ "learning_rate": 2.9213353689212337e-06,
708
+ "logits/chosen": -1.1559637784957886,
709
+ "logits/rejected": -1.3046997785568237,
710
+ "logps/chosen": -728.3955078125,
711
+ "logps/rejected": -552.3156127929688,
712
+ "loss": 0.5476,
713
+ "rewards/accuracies": 0.7749999761581421,
714
+ "rewards/chosen": 0.6234858632087708,
715
+ "rewards/margins": 0.37362077832221985,
716
+ "rewards/rejected": 0.2498650997877121,
717
+ "step": 450
718
+ },
719
+ {
720
+ "epoch": 0.51,
721
+ "learning_rate": 2.8251093097241895e-06,
722
+ "logits/chosen": -1.0692123174667358,
723
+ "logits/rejected": -1.2266539335250854,
724
+ "logps/chosen": -792.2379150390625,
725
+ "logps/rejected": -661.500732421875,
726
+ "loss": 0.615,
727
+ "rewards/accuracies": 0.7749999761581421,
728
+ "rewards/chosen": 0.6842690706253052,
729
+ "rewards/margins": 0.26619952917099,
730
+ "rewards/rejected": 0.4180695414543152,
731
+ "step": 460
732
+ },
733
+ {
734
+ "epoch": 0.52,
735
+ "learning_rate": 2.7283906132923104e-06,
736
+ "logits/chosen": -1.1081640720367432,
737
+ "logits/rejected": -1.196122407913208,
738
+ "logps/chosen": -861.2401123046875,
739
+ "logps/rejected": -630.8099365234375,
740
+ "loss": 0.5369,
741
+ "rewards/accuracies": 0.824999988079071,
742
+ "rewards/chosen": 0.7281755208969116,
743
+ "rewards/margins": 0.4001568853855133,
744
+ "rewards/rejected": 0.3280186355113983,
745
+ "step": 470
746
+ },
747
+ {
748
+ "epoch": 0.54,
749
+ "learning_rate": 2.6313258371978996e-06,
750
+ "logits/chosen": -1.0956523418426514,
751
+ "logits/rejected": -1.2124046087265015,
752
+ "logps/chosen": -806.178466796875,
753
+ "logps/rejected": -556.8995361328125,
754
+ "loss": 0.5704,
755
+ "rewards/accuracies": 0.7749999761581421,
756
+ "rewards/chosen": 0.6232426762580872,
757
+ "rewards/margins": 0.3145415782928467,
758
+ "rewards/rejected": 0.3087010979652405,
759
+ "step": 480
760
+ },
761
+ {
762
+ "epoch": 0.55,
763
+ "learning_rate": 2.5340620634268167e-06,
764
+ "logits/chosen": -1.1129533052444458,
765
+ "logits/rejected": -1.3338663578033447,
766
+ "logps/chosen": -722.3619995117188,
767
+ "logps/rejected": -528.3717041015625,
768
+ "loss": 0.5459,
769
+ "rewards/accuracies": 0.7749999761581421,
770
+ "rewards/chosen": 0.6257763504981995,
771
+ "rewards/margins": 0.37175193428993225,
772
+ "rewards/rejected": 0.2540244162082672,
773
+ "step": 490
774
+ },
775
+ {
776
+ "epoch": 0.56,
777
+ "learning_rate": 2.436746675505545e-06,
778
+ "logits/chosen": -1.0671157836914062,
779
+ "logits/rejected": -1.2726842164993286,
780
+ "logps/chosen": -709.8563842773438,
781
+ "logps/rejected": -538.7957763671875,
782
+ "loss": 0.5596,
783
+ "rewards/accuracies": 0.8500000238418579,
784
+ "rewards/chosen": 0.6499138474464417,
785
+ "rewards/margins": 0.3205423951148987,
786
+ "rewards/rejected": 0.3293713927268982,
787
+ "step": 500
788
+ },
789
+ {
790
+ "epoch": 0.56,
791
+ "eval_logits/chosen": -1.085227131843567,
792
+ "eval_logits/rejected": -1.2330302000045776,
793
+ "eval_logps/chosen": -790.7927856445312,
794
+ "eval_logps/rejected": -646.7930297851562,
795
+ "eval_loss": 0.6230102181434631,
796
+ "eval_rewards/accuracies": 0.6683416962623596,
797
+ "eval_rewards/chosen": 0.5966897010803223,
798
+ "eval_rewards/margins": 0.20368172228336334,
799
+ "eval_rewards/rejected": 0.39300796389579773,
800
+ "eval_runtime": 182.1485,
801
+ "eval_samples_per_second": 2.185,
802
+ "eval_steps_per_second": 2.185,
803
+ "step": 500
804
+ },
805
+ {
806
+ "epoch": 0.57,
807
+ "learning_rate": 2.3395271351713515e-06,
808
+ "logits/chosen": -1.0355545282363892,
809
+ "logits/rejected": -1.2773510217666626,
810
+ "logps/chosen": -846.6865234375,
811
+ "logps/rejected": -515.6051635742188,
812
+ "loss": 0.5052,
813
+ "rewards/accuracies": 0.8500000238418579,
814
+ "rewards/chosen": 0.7559247016906738,
815
+ "rewards/margins": 0.48392027616500854,
816
+ "rewards/rejected": 0.2720043659210205,
817
+ "step": 510
818
+ },
819
+ {
820
+ "epoch": 0.58,
821
+ "learning_rate": 2.2425507589239154e-06,
822
+ "logits/chosen": -1.0901484489440918,
823
+ "logits/rejected": -1.2016217708587646,
824
+ "logps/chosen": -946.5360107421875,
825
+ "logps/rejected": -568.994384765625,
826
+ "loss": 0.519,
827
+ "rewards/accuracies": 0.7749999761581421,
828
+ "rewards/chosen": 0.8060197830200195,
829
+ "rewards/margins": 0.48728522658348083,
830
+ "rewards/rejected": 0.3187345862388611,
831
+ "step": 520
832
+ },
833
+ {
834
+ "epoch": 0.59,
835
+ "learning_rate": 2.145964494797051e-06,
836
+ "logits/chosen": -1.0998233556747437,
837
+ "logits/rejected": -1.2087761163711548,
838
+ "logps/chosen": -728.5558471679688,
839
+ "logps/rejected": -671.7276611328125,
840
+ "loss": 0.6116,
841
+ "rewards/accuracies": 0.699999988079071,
842
+ "rewards/chosen": 0.6140581965446472,
843
+ "rewards/margins": 0.23174139857292175,
844
+ "rewards/rejected": 0.3823167681694031,
845
+ "step": 530
846
+ },
847
+ {
848
+ "epoch": 0.6,
849
+ "learning_rate": 2.049914699688762e-06,
850
+ "logits/chosen": -1.1990127563476562,
851
+ "logits/rejected": -1.2407617568969727,
852
+ "logps/chosen": -739.6600341796875,
853
+ "logps/rejected": -640.65380859375,
854
+ "loss": 0.5987,
855
+ "rewards/accuracies": 0.7250000238418579,
856
+ "rewards/chosen": 0.6110137104988098,
857
+ "rewards/margins": 0.2484373301267624,
858
+ "rewards/rejected": 0.36257636547088623,
859
+ "step": 540
860
+ },
861
+ {
862
+ "epoch": 0.61,
863
+ "learning_rate": 1.954546917587033e-06,
864
+ "logits/chosen": -1.1063880920410156,
865
+ "logits/rejected": -1.2011101245880127,
866
+ "logps/chosen": -756.6212768554688,
867
+ "logps/rejected": -455.97979736328125,
868
+ "loss": 0.5123,
869
+ "rewards/accuracies": 0.8500000238418579,
870
+ "rewards/chosen": 0.7002598643302917,
871
+ "rewards/margins": 0.47388529777526855,
872
+ "rewards/rejected": 0.22637462615966797,
873
+ "step": 550
874
+ },
875
+ {
876
+ "epoch": 0.62,
877
+ "learning_rate": 1.8600056590274355e-06,
878
+ "logits/chosen": -1.095252513885498,
879
+ "logits/rejected": -1.1555767059326172,
880
+ "logps/chosen": -730.5572509765625,
881
+ "logps/rejected": -615.13134765625,
882
+ "loss": 0.619,
883
+ "rewards/accuracies": 0.675000011920929,
884
+ "rewards/chosen": 0.6304242610931396,
885
+ "rewards/margins": 0.23751536011695862,
886
+ "rewards/rejected": 0.3929089605808258,
887
+ "step": 560
888
+ },
889
+ {
890
+ "epoch": 0.64,
891
+ "learning_rate": 1.766434182116708e-06,
892
+ "logits/chosen": -1.173472285270691,
893
+ "logits/rejected": -1.1808496713638306,
894
+ "logps/chosen": -754.0181884765625,
895
+ "logps/rejected": -660.9012451171875,
896
+ "loss": 0.6232,
897
+ "rewards/accuracies": 0.675000011920929,
898
+ "rewards/chosen": 0.5536493062973022,
899
+ "rewards/margins": 0.21175837516784668,
900
+ "rewards/rejected": 0.34189099073410034,
901
+ "step": 570
902
+ },
903
+ {
904
+ "epoch": 0.65,
905
+ "learning_rate": 1.6739742754541515e-06,
906
+ "logits/chosen": -0.9952909350395203,
907
+ "logits/rejected": -1.2373504638671875,
908
+ "logps/chosen": -740.7738037109375,
909
+ "logps/rejected": -658.3294067382812,
910
+ "loss": 0.57,
911
+ "rewards/accuracies": 0.75,
912
+ "rewards/chosen": 0.6436474323272705,
913
+ "rewards/margins": 0.3406828045845032,
914
+ "rewards/rejected": 0.30296462774276733,
915
+ "step": 580
916
+ },
917
+ {
918
+ "epoch": 0.66,
919
+ "learning_rate": 1.582766043279752e-06,
920
+ "logits/chosen": -1.0711066722869873,
921
+ "logits/rejected": -1.2044506072998047,
922
+ "logps/chosen": -700.0191650390625,
923
+ "logps/rejected": -573.0767211914062,
924
+ "loss": 0.5885,
925
+ "rewards/accuracies": 0.7250000238418579,
926
+ "rewards/chosen": 0.5518115758895874,
927
+ "rewards/margins": 0.28940147161483765,
928
+ "rewards/rejected": 0.26241010427474976,
929
+ "step": 590
930
+ },
931
+ {
932
+ "epoch": 0.67,
933
+ "learning_rate": 1.4929476931746167e-06,
934
+ "logits/chosen": -0.9843810796737671,
935
+ "logits/rejected": -1.1629724502563477,
936
+ "logps/chosen": -815.2390747070312,
937
+ "logps/rejected": -556.5223999023438,
938
+ "loss": 0.5382,
939
+ "rewards/accuracies": 0.8500000238418579,
940
+ "rewards/chosen": 0.6965817213058472,
941
+ "rewards/margins": 0.40665000677108765,
942
+ "rewards/rejected": 0.2899317145347595,
943
+ "step": 600
944
+ },
945
+ {
946
+ "epoch": 0.67,
947
+ "eval_logits/chosen": -1.054700493812561,
948
+ "eval_logits/rejected": -1.2033686637878418,
949
+ "eval_logps/chosen": -793.2485961914062,
950
+ "eval_logps/rejected": -650.0925903320312,
951
+ "eval_loss": 0.6198973059654236,
952
+ "eval_rewards/accuracies": 0.6708542704582214,
953
+ "eval_rewards/chosen": 0.5721314549446106,
954
+ "eval_rewards/margins": 0.21211864054203033,
955
+ "eval_rewards/rejected": 0.36001279950141907,
956
+ "eval_runtime": 182.0552,
957
+ "eval_samples_per_second": 2.186,
958
+ "eval_steps_per_second": 2.186,
959
+ "step": 600
960
+ },
961
+ {
962
+ "epoch": 0.68,
963
+ "learning_rate": 1.4046553266354126e-06,
964
+ "logits/chosen": -1.1579875946044922,
965
+ "logits/rejected": -1.1627570390701294,
966
+ "logps/chosen": -710.740966796875,
967
+ "logps/rejected": -705.4386596679688,
968
+ "loss": 0.651,
969
+ "rewards/accuracies": 0.6000000238418579,
970
+ "rewards/chosen": 0.5279535055160522,
971
+ "rewards/margins": 0.15107524394989014,
972
+ "rewards/rejected": 0.3768783211708069,
973
+ "step": 610
974
+ },
975
+ {
976
+ "epoch": 0.69,
977
+ "learning_rate": 1.318022732840141e-06,
978
+ "logits/chosen": -1.082596778869629,
979
+ "logits/rejected": -1.2048799991607666,
980
+ "logps/chosen": -773.497802734375,
981
+ "logps/rejected": -556.2584838867188,
982
+ "loss": 0.5922,
983
+ "rewards/accuracies": 0.800000011920929,
984
+ "rewards/chosen": 0.5189529657363892,
985
+ "rewards/margins": 0.2554023265838623,
986
+ "rewards/rejected": 0.26355063915252686,
987
+ "step": 620
988
+ },
989
+ {
990
+ "epoch": 0.7,
991
+ "learning_rate": 1.2331811859177722e-06,
992
+ "logits/chosen": -1.1044152975082397,
993
+ "logits/rejected": -1.3350199460983276,
994
+ "logps/chosen": -763.0442504882812,
995
+ "logps/rejected": -605.748046875,
996
+ "loss": 0.5383,
997
+ "rewards/accuracies": 0.75,
998
+ "rewards/chosen": 0.6514481902122498,
999
+ "rewards/margins": 0.42245060205459595,
1000
+ "rewards/rejected": 0.2289975881576538,
1001
+ "step": 630
1002
+ },
1003
+ {
1004
+ "epoch": 0.71,
1005
+ "learning_rate": 1.150259246028921e-06,
1006
+ "logits/chosen": -0.9752397537231445,
1007
+ "logits/rejected": -1.1639258861541748,
1008
+ "logps/chosen": -736.8223266601562,
1009
+ "logps/rejected": -610.3005981445312,
1010
+ "loss": 0.5711,
1011
+ "rewards/accuracies": 0.75,
1012
+ "rewards/chosen": 0.6042425036430359,
1013
+ "rewards/margins": 0.3154502809047699,
1014
+ "rewards/rejected": 0.288792222738266,
1015
+ "step": 640
1016
+ },
1017
+ {
1018
+ "epoch": 0.72,
1019
+ "learning_rate": 1.0693825645589887e-06,
1020
+ "logits/chosen": -0.9845565557479858,
1021
+ "logits/rejected": -1.205209732055664,
1022
+ "logps/chosen": -778.3556518554688,
1023
+ "logps/rejected": -546.72265625,
1024
+ "loss": 0.5316,
1025
+ "rewards/accuracies": 0.800000011920929,
1026
+ "rewards/chosen": 0.6856497526168823,
1027
+ "rewards/margins": 0.4366929531097412,
1028
+ "rewards/rejected": 0.24895676970481873,
1029
+ "step": 650
1030
+ },
1031
+ {
1032
+ "epoch": 0.74,
1033
+ "learning_rate": 9.9067369371897e-07,
1034
+ "logits/chosen": -1.1334224939346313,
1035
+ "logits/rejected": -1.261704683303833,
1036
+ "logps/chosen": -795.8439331054688,
1037
+ "logps/rejected": -630.4856567382812,
1038
+ "loss": 0.5529,
1039
+ "rewards/accuracies": 0.7749999761581421,
1040
+ "rewards/chosen": 0.6260807514190674,
1041
+ "rewards/margins": 0.3732090890407562,
1042
+ "rewards/rejected": 0.25287163257598877,
1043
+ "step": 660
1044
+ },
1045
+ {
1046
+ "epoch": 0.75,
1047
+ "learning_rate": 9.14251900842432e-07,
1048
+ "logits/chosen": -1.0554249286651611,
1049
+ "logits/rejected": -1.1402459144592285,
1050
+ "logps/chosen": -697.9842529296875,
1051
+ "logps/rejected": -549.4150390625,
1052
+ "loss": 0.5811,
1053
+ "rewards/accuracies": 0.7749999761581421,
1054
+ "rewards/chosen": 0.530659556388855,
1055
+ "rewards/margins": 0.29245835542678833,
1056
+ "rewards/rejected": 0.23820118606090546,
1057
+ "step": 670
1058
+ },
1059
+ {
1060
+ "epoch": 0.76,
1061
+ "learning_rate": 8.402329876600462e-07,
1062
+ "logits/chosen": -1.0341722965240479,
1063
+ "logits/rejected": -1.126814603805542,
1064
+ "logps/chosen": -864.65966796875,
1065
+ "logps/rejected": -632.652099609375,
1066
+ "loss": 0.5377,
1067
+ "rewards/accuracies": 0.7749999761581421,
1068
+ "rewards/chosen": 0.72022545337677,
1069
+ "rewards/margins": 0.4036920964717865,
1070
+ "rewards/rejected": 0.3165333867073059,
1071
+ "step": 680
1072
+ },
1073
+ {
1074
+ "epoch": 0.77,
1075
+ "learning_rate": 7.687291148255527e-07,
1076
+ "logits/chosen": -1.0247344970703125,
1077
+ "logits/rejected": -1.1219924688339233,
1078
+ "logps/chosen": -944.3568115234375,
1079
+ "logps/rejected": -662.272216796875,
1080
+ "loss": 0.5183,
1081
+ "rewards/accuracies": 0.7749999761581421,
1082
+ "rewards/chosen": 0.8120288848876953,
1083
+ "rewards/margins": 0.4406498968601227,
1084
+ "rewards/rejected": 0.37137895822525024,
1085
+ "step": 690
1086
+ },
1087
+ {
1088
+ "epoch": 0.78,
1089
+ "learning_rate": 6.998486319590323e-07,
1090
+ "logits/chosen": -1.0355504751205444,
1091
+ "logits/rejected": -1.0869207382202148,
1092
+ "logps/chosen": -768.5548095703125,
1093
+ "logps/rejected": -638.7791748046875,
1094
+ "loss": 0.5952,
1095
+ "rewards/accuracies": 0.7250000238418579,
1096
+ "rewards/chosen": 0.6144096851348877,
1097
+ "rewards/margins": 0.27622345089912415,
1098
+ "rewards/rejected": 0.33818623423576355,
1099
+ "step": 700
1100
+ },
1101
+ {
1102
+ "epoch": 0.78,
1103
+ "eval_logits/chosen": -1.0323765277862549,
1104
+ "eval_logits/rejected": -1.182665228843689,
1105
+ "eval_logps/chosen": -794.96044921875,
1106
+ "eval_logps/rejected": -652.0419921875,
1107
+ "eval_loss": 0.6185528635978699,
1108
+ "eval_rewards/accuracies": 0.6783919334411621,
1109
+ "eval_rewards/chosen": 0.5550140738487244,
1110
+ "eval_rewards/margins": 0.21449486911296844,
1111
+ "eval_rewards/rejected": 0.3405191898345947,
1112
+ "eval_runtime": 182.2003,
1113
+ "eval_samples_per_second": 2.184,
1114
+ "eval_steps_per_second": 2.184,
1115
+ "step": 700
1116
+ },
1117
+ {
1118
+ "epoch": 0.79,
1119
+ "learning_rate": 6.336959134650278e-07,
1120
+ "logits/chosen": -1.1061818599700928,
1121
+ "logits/rejected": -1.1679563522338867,
1122
+ "logps/chosen": -823.1553955078125,
1123
+ "logps/rejected": -556.3447875976562,
1124
+ "loss": 0.5435,
1125
+ "rewards/accuracies": 0.7749999761581421,
1126
+ "rewards/chosen": 0.6446937322616577,
1127
+ "rewards/margins": 0.3869627118110657,
1128
+ "rewards/rejected": 0.2577310800552368,
1129
+ "step": 710
1130
+ },
1131
+ {
1132
+ "epoch": 0.8,
1133
+ "learning_rate": 5.703712003742965e-07,
1134
+ "logits/chosen": -1.013837456703186,
1135
+ "logits/rejected": -1.2135231494903564,
1136
+ "logps/chosen": -790.27490234375,
1137
+ "logps/rejected": -591.6689453125,
1138
+ "loss": 0.5499,
1139
+ "rewards/accuracies": 0.75,
1140
+ "rewards/chosen": 0.6053417325019836,
1141
+ "rewards/margins": 0.37499576807022095,
1142
+ "rewards/rejected": 0.2303459644317627,
1143
+ "step": 720
1144
+ },
1145
+ {
1146
+ "epoch": 0.81,
1147
+ "learning_rate": 5.099704484488569e-07,
1148
+ "logits/chosen": -1.0917243957519531,
1149
+ "logits/rejected": -1.1174938678741455,
1150
+ "logps/chosen": -800.6257934570312,
1151
+ "logps/rejected": -576.2686767578125,
1152
+ "loss": 0.5762,
1153
+ "rewards/accuracies": 0.824999988079071,
1154
+ "rewards/chosen": 0.5678650736808777,
1155
+ "rewards/margins": 0.30998459458351135,
1156
+ "rewards/rejected": 0.25788044929504395,
1157
+ "step": 730
1158
+ },
1159
+ {
1160
+ "epoch": 0.82,
1161
+ "learning_rate": 4.525851827804881e-07,
1162
+ "logits/chosen": -1.0324584245681763,
1163
+ "logits/rejected": -1.2507680654525757,
1164
+ "logps/chosen": -849.1965942382812,
1165
+ "logps/rejected": -620.4736328125,
1166
+ "loss": 0.5563,
1167
+ "rewards/accuracies": 0.824999988079071,
1168
+ "rewards/chosen": 0.6641701459884644,
1169
+ "rewards/margins": 0.3699476420879364,
1170
+ "rewards/rejected": 0.2942224442958832,
1171
+ "step": 740
1172
+ },
1173
+ {
1174
+ "epoch": 0.84,
1175
+ "learning_rate": 3.983023591030113e-07,
1176
+ "logits/chosen": -1.0120137929916382,
1177
+ "logits/rejected": -1.2262170314788818,
1178
+ "logps/chosen": -834.7081909179688,
1179
+ "logps/rejected": -564.6629638671875,
1180
+ "loss": 0.5074,
1181
+ "rewards/accuracies": 0.8999999761581421,
1182
+ "rewards/chosen": 0.6688768267631531,
1183
+ "rewards/margins": 0.48412173986434937,
1184
+ "rewards/rejected": 0.1847551316022873,
1185
+ "step": 750
1186
+ },
1187
+ {
1188
+ "epoch": 0.85,
1189
+ "learning_rate": 3.472042320285071e-07,
1190
+ "logits/chosen": -1.0783493518829346,
1191
+ "logits/rejected": -1.2198795080184937,
1192
+ "logps/chosen": -753.7183837890625,
1193
+ "logps/rejected": -542.9868774414062,
1194
+ "loss": 0.5736,
1195
+ "rewards/accuracies": 0.675000011920929,
1196
+ "rewards/chosen": 0.5568192005157471,
1197
+ "rewards/margins": 0.3265761733055115,
1198
+ "rewards/rejected": 0.2302430123090744,
1199
+ "step": 760
1200
+ },
1201
+ {
1202
+ "epoch": 0.86,
1203
+ "learning_rate": 2.9936823040713464e-07,
1204
+ "logits/chosen": -1.0335241556167603,
1205
+ "logits/rejected": -1.1321046352386475,
1206
+ "logps/chosen": -708.8296508789062,
1207
+ "logps/rejected": -637.4175415039062,
1208
+ "loss": 0.587,
1209
+ "rewards/accuracies": 0.675000011920929,
1210
+ "rewards/chosen": 0.6139867901802063,
1211
+ "rewards/margins": 0.3065374493598938,
1212
+ "rewards/rejected": 0.3074493110179901,
1213
+ "step": 770
1214
+ },
1215
+ {
1216
+ "epoch": 0.87,
1217
+ "learning_rate": 2.5486683999940335e-07,
1218
+ "logits/chosen": -1.0177851915359497,
1219
+ "logits/rejected": -1.2519065141677856,
1220
+ "logps/chosen": -767.447021484375,
1221
+ "logps/rejected": -714.9308471679688,
1222
+ "loss": 0.5753,
1223
+ "rewards/accuracies": 0.7749999761581421,
1224
+ "rewards/chosen": 0.6363558769226074,
1225
+ "rewards/margins": 0.33481666445732117,
1226
+ "rewards/rejected": 0.30153924226760864,
1227
+ "step": 780
1228
+ },
1229
+ {
1230
+ "epoch": 0.88,
1231
+ "learning_rate": 2.137674936387049e-07,
1232
+ "logits/chosen": -1.1295353174209595,
1233
+ "logits/rejected": -1.1564512252807617,
1234
+ "logps/chosen": -883.1470947265625,
1235
+ "logps/rejected": -631.2457275390625,
1236
+ "loss": 0.5517,
1237
+ "rewards/accuracies": 0.800000011920929,
1238
+ "rewards/chosen": 0.695419430732727,
1239
+ "rewards/margins": 0.38906994462013245,
1240
+ "rewards/rejected": 0.30634939670562744,
1241
+ "step": 790
1242
+ },
1243
+ {
1244
+ "epoch": 0.89,
1245
+ "learning_rate": 1.7613246905052812e-07,
1246
+ "logits/chosen": -1.0573675632476807,
1247
+ "logits/rejected": -1.1950371265411377,
1248
+ "logps/chosen": -760.9122314453125,
1249
+ "logps/rejected": -630.8699340820312,
1250
+ "loss": 0.5792,
1251
+ "rewards/accuracies": 0.6499999761581421,
1252
+ "rewards/chosen": 0.5912384390830994,
1253
+ "rewards/margins": 0.32750827074050903,
1254
+ "rewards/rejected": 0.2637301981449127,
1255
+ "step": 800
1256
+ },
1257
+ {
1258
+ "epoch": 0.89,
1259
+ "eval_logits/chosen": -1.0307811498641968,
1260
+ "eval_logits/rejected": -1.1811619997024536,
1261
+ "eval_logps/chosen": -795.125,
1262
+ "eval_logps/rejected": -652.2705078125,
1263
+ "eval_loss": 0.6182109117507935,
+ "eval_rewards/accuracies": 0.6783919334411621,
+ "eval_rewards/chosen": 0.553367018699646,
+ "eval_rewards/margins": 0.2151338756084442,
+ "eval_rewards/rejected": 0.3382331430912018,
+ "eval_runtime": 182.0989,
+ "eval_samples_per_second": 2.186,
+ "eval_steps_per_second": 2.186,
+ "step": 800
+ },
+ {
+ "epoch": 0.9,
+ "learning_rate": 1.4201879448319356e-07,
+ "logits/chosen": -1.0675430297851562,
+ "logits/rejected": -1.1287426948547363,
+ "logps/chosen": -711.8330078125,
+ "logps/rejected": -551.1827392578125,
+ "loss": 0.5842,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": 0.5683950185775757,
+ "rewards/margins": 0.2922239601612091,
+ "rewards/rejected": 0.2761710286140442,
+ "step": 810
+ },
+ {
+ "epoch": 0.91,
+ "learning_rate": 1.1147816229310549e-07,
+ "logits/chosen": -1.016808032989502,
+ "logits/rejected": -1.1210168600082397,
+ "logps/chosen": -691.964599609375,
+ "logps/rejected": -732.0343017578125,
+ "loss": 0.637,
+ "rewards/accuracies": 0.6000000238418579,
+ "rewards/chosen": 0.5624631643295288,
+ "rewards/margins": 0.19316788017749786,
+ "rewards/rejected": 0.3692953288555145,
+ "step": 820
+ },
+ {
+ "epoch": 0.93,
+ "learning_rate": 8.455685061547119e-08,
+ "logits/chosen": -1.1123645305633545,
+ "logits/rejected": -1.132232904434204,
+ "logps/chosen": -859.2161254882812,
+ "logps/rejected": -662.1595458984375,
+ "loss": 0.5684,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": 0.7145462036132812,
+ "rewards/margins": 0.3542642295360565,
+ "rewards/rejected": 0.36028194427490234,
+ "step": 830
+ },
+ {
+ "epoch": 0.94,
+ "learning_rate": 6.129565323916814e-08,
+ "logits/chosen": -1.0635792016983032,
+ "logits/rejected": -1.1541740894317627,
+ "logps/chosen": -795.508056640625,
+ "logps/rejected": -634.9236450195312,
+ "loss": 0.5683,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": 0.6695567965507507,
+ "rewards/margins": 0.3517037034034729,
+ "rewards/rejected": 0.3178531527519226,
+ "step": 840
+ },
+ {
+ "epoch": 0.95,
+ "learning_rate": 4.1729817792030004e-08,
+ "logits/chosen": -1.1519498825073242,
+ "logits/rejected": -1.1835066080093384,
+ "logps/chosen": -785.9249267578125,
+ "logps/rejected": -617.5560913085938,
+ "loss": 0.579,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": 0.589630126953125,
+ "rewards/margins": 0.2917063534259796,
+ "rewards/rejected": 0.297923743724823,
+ "step": 850
+ },
+ {
+ "epoch": 0.96,
+ "learning_rate": 2.588899233021358e-08,
+ "logits/chosen": -1.0503435134887695,
+ "logits/rejected": -1.218450665473938,
+ "logps/chosen": -840.5927734375,
+ "logps/rejected": -520.814208984375,
+ "loss": 0.5205,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 0.6971001029014587,
+ "rewards/margins": 0.4537246823310852,
+ "rewards/rejected": 0.24337545037269592,
+ "step": 860
+ },
+ {
+ "epoch": 0.97,
+ "learning_rate": 1.3797180412583322e-08,
+ "logits/chosen": -1.0805847644805908,
+ "logits/rejected": -1.2713693380355835,
+ "logps/chosen": -800.3570556640625,
+ "logps/rejected": -632.8238525390625,
+ "loss": 0.5758,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": 0.5660972595214844,
+ "rewards/margins": 0.31249552965164185,
+ "rewards/rejected": 0.25360172986984253,
+ "step": 870
+ },
+ {
+ "epoch": 0.98,
+ "learning_rate": 5.4727047281821764e-09,
+ "logits/chosen": -1.0479118824005127,
+ "logits/rejected": -1.3037327527999878,
+ "logps/chosen": -783.6592407226562,
+ "logps/rejected": -529.0799560546875,
+ "loss": 0.5171,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 0.6520611047744751,
+ "rewards/margins": 0.47154098749160767,
+ "rewards/rejected": 0.18052014708518982,
+ "step": 880
+ },
+ {
+ "epoch": 0.99,
+ "learning_rate": 9.281793319140808e-10,
+ "logits/chosen": -1.0751326084136963,
+ "logits/rejected": -1.2520333528518677,
+ "logps/chosen": -834.0614013671875,
+ "logps/rejected": -488.2327575683594,
+ "loss": 0.5113,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": 0.6582953929901123,
+ "rewards/margins": 0.4814949631690979,
+ "rewards/rejected": 0.17680040001869202,
+ "step": 890
+ },
+ {
+ "epoch": 1.0,
+ "step": 897,
+ "total_flos": 0.0,
+ "train_loss": 0.06080360662445443,
+ "train_runtime": 395.6281,
+ "train_samples_per_second": 9.069,
+ "train_steps_per_second": 2.267
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 897,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 100,
+ "total_flos": 0.0,
+ "train_batch_size": 1,
+ "trial_name": null,
+ "trial_params": null
+ }
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f4c92da48b380a752456a51c563928238f349f671e997c4a78a9d341643e56f5
+ oid sha256:004c3899205019a7a4570b0b56dde6a4993781fe5679c59032786d1ebd7a0662
  size 4856