UphamProjects committed
Commit
4f659cc
1 Parent(s): 2f452ca

Delete oasst-sft-4-pythia-12b-epoch-3.5

oasst-sft-4-pythia-12b-epoch-3.5/.gitattributes DELETED
@@ -1,34 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
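
The deleted `.gitattributes` routed every matching path (weights, archives, tensorboard event files) through Git LFS, so git stored small pointer files instead of the blobs themselves. A minimal sketch of that matching logic, using Python's `fnmatch` as a rough stand-in for gitattributes glob semantics (the real rules, e.g. for `saved_model/**/*`, are richer):

```python
# Sketch only: approximate which paths the deleted rules would have sent to LFS.
from fnmatch import fnmatch

# A few of the patterns from the deleted .gitattributes above.
LFS_PATTERNS = ["*.bin", "*.safetensors", "*.ckpt", "*.pt", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    """True if any LFS pattern matches the path or its basename."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) or fnmatch(path, pat) for pat in LFS_PATTERNS)

for f in ("pytorch_model-00001-of-00003.bin", "config.json", "README.md"):
    print(f, "->", "LFS pointer" if is_lfs_tracked(f) else "regular blob")
```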
oasst-sft-4-pythia-12b-epoch-3.5/LICENSE DELETED
@@ -1,201 +0,0 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
oasst-sft-4-pythia-12b-epoch-3.5/README.md DELETED
@@ -1,139 +0,0 @@
- ---
- license: apache-2.0
- language:
- - en
- tags:
- - sft
- pipeline_tag: text-generation
- widget:
- - text: <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
- - text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- - text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
- ---
-
- # Open-Assistant SFT-4 12B Model
-
-
- This is the 4th iteration English supervised-fine-tuning (SFT) model of
- the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) project.
- It is based on a Pythia 12B that was fine-tuned on human demonstrations
- of assistant conversations collected through the
- [https://open-assistant.io/](https://open-assistant.io/) human feedback web
- app before March 25, 2023.
-
- ## Model Details
-
- - **Developed by:** [Open-Assistant Contributors](https://open-assistant.io/)
- - **Model type:** Transformer-based Language Model
- - **Language:** English
- - **Finetuned from:** [EleutherAI / pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
- - **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- - **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-04-03_andreaskoepf_oasst-sft-4-pythia-12b-epoch-3_5_sampling_noprefix_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json)
- - **License:** Apache 2.0
- - **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
-
- ## Prompting
-
- Two special tokens are used to mark the beginning of user and assistant turns:
- `<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
-
- Input prompt example:
- ```
- <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
- ```
- The input ends with the `<|assistant|>` token to signal that the model should
- start generating the assistant reply.
-
-
- ## Dev Details
-
- - wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/770a0t41
- - base model: [andreaskoepf/pythia-12b-pre-2000](https://huggingface.co/andreaskoepf/pythia-12b-pre-2000)
- - checkpoint: 4000 steps
-
- command: `deepspeed trainer_sft.py --configs defaults reference-data reference-pythia-12b --cache_dir /home/ubuntu/data_cache --output_dir .saved/oasst-sft-3-pythia-12b-reference_2kpre --num_train_epochs 8 --residual_dropout 0.2 --deepspeed --use_flash_attention true --model_name andreaskoepf/pythia-12b-pre-2000`
-
- data:
- ```
- reference-data:
-   datasets:
-     - oasst_export:
-         lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
-         input_file_path: 2023-03-25_oasst_research_ready_synth_labels.jsonl.gz
-         val_split: 0.05
-     - alpaca
-   sort_by_length: false
-   use_custom_sampler: false
- ```
-
-
- pythia:
- ```
- reference-pythia-12b:
-   dtype: fp16
-   log_dir: "pythia_log_12b"
-   learning_rate: 6e-6
-   model_name: EleutherAI/pythia-12b-deduped
-   output_dir: pythia_model_12b
-   weight_decay: 0.0
-   max_length: 2048
-   warmup_steps: 100
-   gradient_checkpointing: true
-   gradient_accumulation_steps: 2
-   per_device_train_batch_size: 4
-   per_device_eval_batch_size: 4
-   eval_steps: 100
-   save_steps: 1000
-   num_train_epochs: 8
-   save_total_limit: 4
- ```
-
- zero config:
- ```
- {
-   "fp16": {
-     "enabled": "auto",
-     "loss_scale": 0,
-     "loss_scale_window": 1000,
-     "initial_scale_power": 16,
-     "hysteresis": 2,
-     "min_loss_scale": 1
-   },
-   "bf16": {
-     "enabled": "auto"
-   },
-   "optimizer": {
-     "type": "AdamW",
-     "params": {
-       "lr": "auto",
-       "betas": "auto",
-       "eps": "auto",
-       "weight_decay": "auto"
-     }
-   },
-   "scheduler": {
-     "type": "WarmupDecayLR",
-     "params": {
-       "warmup_min_lr": "auto",
-       "warmup_max_lr": "auto",
-       "warmup_num_steps": "auto",
-       "total_num_steps": "auto"
-     }
-   },
-   "zero_optimization": {
-     "stage": 2,
-     "allgather_partitions": true,
-     "allgather_bucket_size": 1e9,
-     "overlap_comm": false,
-     "reduce_scatter": true,
-     "reduce_bucket_size": 1e9,
-     "contiguous_gradients": true
-   },
-   "gradient_accumulation_steps": "auto",
-   "gradient_clipping": "auto",
-   "steps_per_print": 2000,
-   "train_batch_size": "auto",
-   "train_micro_batch_size_per_gpu": "auto",
-   "wall_clock_breakdown": false
- }
- ```
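
For reference, a minimal inference sketch for the prompt format documented in the deleted README. It assumes the upstream checkpoint `OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5` is still available on the Hub and that there is enough GPU memory for ~24 GB of fp16 weights:

```python
# Sketch: query the model with the documented <|prompter|>/<|assistant|> format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # assumed upstream copy
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# One user turn, closed by <|endoftext|>, then <|assistant|> to start the reply.
prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```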
oasst-sft-4-pythia-12b-epoch-3.5/config.json DELETED
@@ -1,25 +0,0 @@
- {
-   "_name_or_path": ".saved/oasst-sft-3-pythia-12b-reference_2kpre/checkpoint-4000/",
-   "architectures": [
-     "GPTNeoXForCausalLM"
-   ],
-   "bos_token_id": 0,
-   "eos_token_id": 0,
-   "hidden_act": "gelu",
-   "hidden_size": 5120,
-   "initializer_range": 0.02,
-   "intermediate_size": 20480,
-   "layer_norm_eps": 1e-05,
-   "max_position_embeddings": 2048,
-   "model_type": "gpt_neox",
-   "num_attention_heads": 40,
-   "num_hidden_layers": 36,
-   "rotary_emb_base": 10000,
-   "rotary_pct": 0.25,
-   "tie_word_embeddings": false,
-   "torch_dtype": "float16",
-   "transformers_version": "4.28.0.dev0",
-   "use_cache": true,
-   "use_parallel_residual": true,
-   "vocab_size": 50288
- }
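
The deleted config fixes the GPT-NeoX geometry: 36 layers, hidden size 5120, intermediate size 20480, vocabulary 50288, untied embeddings. A back-of-envelope sketch (assuming the standard GPT-NeoX parameter layout) shows this implies roughly 11.8B parameters, consistent with the ~23.7 GB fp16 `total_size` recorded in the shard index below:

```python
# Rough parameter count implied by the deleted config.json (sketch; ignores
# non-trainable buffers such as attention.bias / masked_bias / rotary inv_freq).
h, layers, vocab, inter = 5120, 36, 50288, 20480

per_layer = (
    h * 3 * h + 3 * h    # attention.query_key_value weight + bias
    + h * h + h          # attention.dense
    + h * inter + inter  # mlp.dense_h_to_4h
    + inter * h + h      # mlp.dense_4h_to_h
    + 4 * h              # input_layernorm + post_attention_layernorm
)
total = layers * per_layer + 2 * vocab * h + 2 * h  # embed_in, embed_out, final_layer_norm
print(f"~{total / 1e9:.2f}B params, ~{2 * total / 1e9:.1f} GB in fp16")
# -> ~11.84B params, ~23.7 GB, matching the index metadata below.
```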
oasst-sft-4-pythia-12b-epoch-3.5/generation_config.json DELETED
@@ -1,6 +0,0 @@
- {
-   "_from_model_config": true,
-   "bos_token_id": 0,
-   "eos_token_id": 0,
-   "transformers_version": "4.28.0.dev0"
- }
oasst-sft-4-pythia-12b-epoch-3.5/pytorch_model-00001-of-00003.bin DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6b2fc2bbe2a54eeb6fbff1e532054d1480fd880e6551af01db5f1c35780f9acf
- size 136
oasst-sft-4-pythia-12b-epoch-3.5/pytorch_model-00002-of-00003.bin DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8903ffdbe7e8491235d25bfb08e9f9e37b5b84d6d392fa9d57d7cf566cf2a3b7
- size 135
oasst-sft-4-pythia-12b-epoch-3.5/pytorch_model-00003-of-00003.bin DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:3f4a7a4f17361061a2cfcbf4920987c0d84703cf6bf945a2b3ab647b88aa6e3f
- size 135
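
All three deleted shard files are Git LFS pointers in the standard three-line `version` / `oid` / `size` format. Note the recorded sizes are only 135-136 bytes: the LFS objects these pointers reference were themselves pointer-sized, not multi-gigabyte weight shards. A small parsing sketch (not a full implementation of the LFS pointer spec):

```python
# Sketch: parse the three-line Git LFS pointer format shown above.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    assert fields["version"].startswith("https://git-lfs.github.com/spec/v1")
    algo, _, digest = fields["oid"].partition(":")
    return {"algo": algo, "digest": digest, "size": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6b2fc2bbe2a54eeb6fbff1e532054d1480fd880e6551af01db5f1c35780f9acf
size 136"""
print(parse_lfs_pointer(pointer))  # algo='sha256', size=136
```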
oasst-sft-4-pythia-12b-epoch-3.5/pytorch_model.bin.index.json DELETED
@@ -1,551 +0,0 @@
- {
-   "metadata": {
-     "total_size": 23702829384.0
-   },
-   "weight_map": {
-     "embed_out.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.embed_in.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.final_layer_norm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.final_layer_norm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.0.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.15.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.15.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.15.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.15.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.15.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.15.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.16.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.2.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.20.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.20.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.21.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.3.attention.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
-     "gpt_neox.layers.30.attention.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.attention.dense.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.attention.dense.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.attention.masked_bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.attention.query_key_value.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.attention.query_key_value.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.input_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.30.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.30.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.post_attention_layernorm.bias": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.30.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
-     "gpt_neox.layers.31.attention.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.attention.dense.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.attention.dense.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.attention.masked_bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.attention.query_key_value.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.attention.query_key_value.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.input_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.post_attention_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.31.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.attention.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.attention.dense.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.attention.dense.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.attention.masked_bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.attention.query_key_value.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.attention.query_key_value.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.input_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.post_attention_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.32.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.attention.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.attention.dense.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.attention.dense.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.attention.masked_bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.attention.query_key_value.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.attention.query_key_value.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.input_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.post_attention_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.33.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.attention.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.attention.dense.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.attention.dense.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.attention.masked_bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.attention.query_key_value.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.attention.query_key_value.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.input_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.post_attention_layernorm.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.34.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.35.attention.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.35.attention.dense.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.35.attention.dense.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.35.attention.masked_bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.35.attention.query_key_value.bias": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.35.attention.query_key_value.weight": "pytorch_model-00003-of-00003.bin",
-     "gpt_neox.layers.35.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
452
- "gpt_neox.layers.35.input_layernorm.bias": "pytorch_model-00003-of-00003.bin",
453
- "gpt_neox.layers.35.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
454
- "gpt_neox.layers.35.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00003.bin",
455
- "gpt_neox.layers.35.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00003.bin",
456
- "gpt_neox.layers.35.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00003.bin",
457
- "gpt_neox.layers.35.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00003.bin",
458
- "gpt_neox.layers.35.post_attention_layernorm.bias": "pytorch_model-00003-of-00003.bin",
459
- "gpt_neox.layers.35.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
460
- "gpt_neox.layers.4.attention.bias": "pytorch_model-00001-of-00003.bin",
461
- "gpt_neox.layers.4.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
462
- "gpt_neox.layers.4.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
463
- "gpt_neox.layers.4.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
464
- "gpt_neox.layers.4.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
465
- "gpt_neox.layers.4.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
466
- "gpt_neox.layers.4.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
467
- "gpt_neox.layers.4.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
468
- "gpt_neox.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
469
- "gpt_neox.layers.4.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
470
- "gpt_neox.layers.4.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
471
- "gpt_neox.layers.4.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
472
- "gpt_neox.layers.4.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
473
- "gpt_neox.layers.4.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
474
- "gpt_neox.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
475
- "gpt_neox.layers.5.attention.bias": "pytorch_model-00001-of-00003.bin",
476
- "gpt_neox.layers.5.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
477
- "gpt_neox.layers.5.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
478
- "gpt_neox.layers.5.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
479
- "gpt_neox.layers.5.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
480
- "gpt_neox.layers.5.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
481
- "gpt_neox.layers.5.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
482
- "gpt_neox.layers.5.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
483
- "gpt_neox.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
484
- "gpt_neox.layers.5.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
485
- "gpt_neox.layers.5.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
486
- "gpt_neox.layers.5.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
487
- "gpt_neox.layers.5.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
488
- "gpt_neox.layers.5.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
489
- "gpt_neox.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
490
- "gpt_neox.layers.6.attention.bias": "pytorch_model-00001-of-00003.bin",
491
- "gpt_neox.layers.6.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
492
- "gpt_neox.layers.6.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
493
- "gpt_neox.layers.6.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
494
- "gpt_neox.layers.6.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
495
- "gpt_neox.layers.6.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
496
- "gpt_neox.layers.6.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
497
- "gpt_neox.layers.6.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
498
- "gpt_neox.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
499
- "gpt_neox.layers.6.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
500
- "gpt_neox.layers.6.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
501
- "gpt_neox.layers.6.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
502
- "gpt_neox.layers.6.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
503
- "gpt_neox.layers.6.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
504
- "gpt_neox.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
505
- "gpt_neox.layers.7.attention.bias": "pytorch_model-00001-of-00003.bin",
506
- "gpt_neox.layers.7.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
507
- "gpt_neox.layers.7.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
508
- "gpt_neox.layers.7.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
509
- "gpt_neox.layers.7.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
510
- "gpt_neox.layers.7.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
511
- "gpt_neox.layers.7.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
512
- "gpt_neox.layers.7.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
513
- "gpt_neox.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
514
- "gpt_neox.layers.7.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
515
- "gpt_neox.layers.7.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
516
- "gpt_neox.layers.7.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
517
- "gpt_neox.layers.7.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
518
- "gpt_neox.layers.7.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
519
- "gpt_neox.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
520
- "gpt_neox.layers.8.attention.bias": "pytorch_model-00001-of-00003.bin",
521
- "gpt_neox.layers.8.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
522
- "gpt_neox.layers.8.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
523
- "gpt_neox.layers.8.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
524
- "gpt_neox.layers.8.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
525
- "gpt_neox.layers.8.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
526
- "gpt_neox.layers.8.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
527
- "gpt_neox.layers.8.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
528
- "gpt_neox.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
529
- "gpt_neox.layers.8.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
530
- "gpt_neox.layers.8.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
531
- "gpt_neox.layers.8.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
532
- "gpt_neox.layers.8.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
533
- "gpt_neox.layers.8.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
534
- "gpt_neox.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
535
- "gpt_neox.layers.9.attention.bias": "pytorch_model-00001-of-00003.bin",
536
- "gpt_neox.layers.9.attention.dense.bias": "pytorch_model-00001-of-00003.bin",
537
- "gpt_neox.layers.9.attention.dense.weight": "pytorch_model-00001-of-00003.bin",
538
- "gpt_neox.layers.9.attention.masked_bias": "pytorch_model-00001-of-00003.bin",
539
- "gpt_neox.layers.9.attention.query_key_value.bias": "pytorch_model-00001-of-00003.bin",
540
- "gpt_neox.layers.9.attention.query_key_value.weight": "pytorch_model-00001-of-00003.bin",
541
- "gpt_neox.layers.9.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
542
- "gpt_neox.layers.9.input_layernorm.bias": "pytorch_model-00001-of-00003.bin",
543
- "gpt_neox.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
544
- "gpt_neox.layers.9.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00003.bin",
545
- "gpt_neox.layers.9.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00003.bin",
546
- "gpt_neox.layers.9.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00003.bin",
547
- "gpt_neox.layers.9.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00003.bin",
548
- "gpt_neox.layers.9.post_attention_layernorm.bias": "pytorch_model-00001-of-00003.bin",
549
- "gpt_neox.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin"
550
- }
551
- }
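Each entry in the deleted pytorch_model.bin.index.json maps one GPT-NeoX parameter to the checkpoint shard that stores it, so a loader only has to open the shard containing a requested tensor instead of all three. A minimal sketch of reading such an index, assuming the index and shard files sit in the working directory under the names shown above (illustrative only, not the exact loader transformers uses):

import json
import torch

# Load the index: {"metadata": {...}, "weight_map": {param name -> shard file}}.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)
weight_map = index["weight_map"]

# Find the shard that holds one layer's attention projection...
name = "gpt_neox.layers.30.attention.dense.weight"
shard_file = weight_map[name]          # "pytorch_model-00002-of-00003.bin"

# ...and load only that shard's state dict to pull the tensor out.
shard = torch.load(shard_file, map_location="cpu")
print(name, tuple(shard[name].shape))

Note how layer 30 straddles a shard boundary in the mapping above: its attention and dense_h_to_4h tensors live in shard 2 while its mlp.dense_4h_to_h tensors already sit in shard 3, which is why the lookup has to be per parameter rather than per layer.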
oasst-sft-4-pythia-12b-epoch-3.5/special_tokens_map.json DELETED
@@ -1,14 +0,0 @@
1
- {
2
- "additional_special_tokens": [
3
- "<|system|>",
4
- "<|assistant|>",
5
- "<|prefix_begin|>",
6
- "<|prefix_end|>",
7
- "<|prompter|>"
8
- ],
9
- "bos_token": "<|endoftext|>",
10
- "eos_token": "<|endoftext|>",
11
- "pad_token": "<|padding|>",
12
- "sep_token": "<|endoftext|>",
13
- "unk_token": "<|endoftext|>"
14
- }
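The special_tokens_map.json being deleted here registers the five OpenAssistant dialogue markers as additional special tokens and reuses <|endoftext|> for bos/eos/sep/unk, with <|padding|> as the pad token. A hedged sketch of what that means in practice, assuming the tokenizer is loaded from the upstream OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 repo (that model id is an assumption; this copy of the file was being removed):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")

# Because the role markers are special tokens, each one encodes to a single id
# instead of being split into sub-word pieces.
prompt = "<|prompter|>Hello!<|endoftext|><|assistant|>"
ids = tok(prompt).input_ids
print(tok.convert_ids_to_tokens(ids))
print(tok.eos_token, tok.pad_token)   # <|endoftext|> <|padding|>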
oasst-sft-4-pythia-12b-epoch-3.5/tokenizer.json DELETED
The diff for this file is too large to render. See raw diff
 
oasst-sft-4-pythia-12b-epoch-3.5/tokenizer_config.json DELETED
@@ -1,10 +0,0 @@
1
- {
2
- "add_prefix_space": false,
3
- "bos_token": "<|endoftext|>",
4
- "clean_up_tokenization_spaces": true,
5
- "eos_token": "<|endoftext|>",
6
- "model_max_length": 1000000000000000019884624838656,
7
- "special_tokens_map_file": "/fsx/home-hailey/.cache/huggingface/hub/models--EleutherAI--gpt-neox-20b/snapshots/3523781c8df75f7741687a4284f6f70e1afa12f4/special_tokens_map.json",
8
- "tokenizer_class": "GPTNeoXTokenizer",
9
- "unk_token": "<|endoftext|>"
10
- }
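One value above routinely looks like corruption but is not: model_max_length is 10^30 passed through a 64-bit float, the sentinel transformers stores when a tokenizer config records no real context limit (the model's usable context comes from the model config instead). A quick check, assuming standard CPython float semantics:

# 1e30 is not exactly representable as a binary64 float; casting it back to
# int yields exactly the sentinel that appears in the config above.
print(int(1e30))   # 1000000000000000019884624838656

The remaining fields are ordinary GPT-NeoX tokenizer settings; the special_tokens_map_file path simply records where the base EleutherAI/gpt-neox-20b tokenizer's map was cached when this config was produced.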