bababababooey committed on
Commit
d0cc3ac
1 Parent(s): 1c4a1c7

Upload 17 files

LICENSE.txt ADDED
@@ -0,0 +1,111 @@
+ LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
+ Llama 3.2 Version Release Date: September 25, 2024
+
+ “Agreement” means the terms and conditions for use, reproduction, distribution
+ and modification of the Llama Materials set forth herein.
+
+ “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
+ distributed by Meta at https://llama.meta.com/doc/overview.
+
+ “Licensee” or “you” means you, or your employer or any other person or entity (if you are
+ entering into this Agreement on such person or entity’s behalf), of the age required under
+ applicable laws, rules or regulations to provide legal consent and that has legal authority
+ to bind your employer or such other person or entity if you are entering in this Agreement
+ on their behalf.
+
+ “Llama 3.2” means the foundational large language models and software and algorithms, including
+ machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
+ fine-tuning enabling code and other elements of the foregoing distributed by Meta at
+ https://www.llama.com/llama-downloads.
+
+ “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
+ any portion thereof) made available under this Agreement.
+
+ “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
+ if you are an entity, your principal place of business is in the EEA or Switzerland)
+ and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
+
+
+ By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
+ you agree to be bound by this Agreement.
+
+
+ 1. License Rights and Redistribution.
+
+ a. Grant of Rights. You are granted a non-exclusive, worldwide,
+ non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
+ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
+ of, and make modifications to the Llama Materials.
+
+ b. Redistribution and Use.
+
+ i. If you distribute or make available the Llama Materials (or any derivative works thereof),
+ or a product or service (including another AI model) that contains any of them, you shall (A) provide
+ a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
+ on a related website, user interface, blogpost, about page, or product documentation. If you use the
+ Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
+ otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
+ at the beginning of any such AI model name.
+
+ ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
+ of an integrated end user product, then Section 2 of this Agreement will not apply to you.
+
+ iii. You must retain in all copies of the Llama Materials that you distribute the
+ following attribution notice within a “Notice” text file distributed as a part of such copies:
+ “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
+ Inc. All Rights Reserved.”
+
+ iv. Your use of the Llama Materials must comply with applicable laws and regulations
+ (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
+ the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
+ incorporated by reference into this Agreement.
+
+ 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
+ of the products or services made available by or for Licensee, or Licensee’s affiliates,
+ is greater than 700 million monthly active users in the preceding calendar month, you must request
+ a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
+ exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
+
+ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
+ RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
+ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
+ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
+ FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
+ WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
+
+ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
+ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
+ FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
+ IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
+
+ 5. Intellectual Property.
+
+ a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
+ neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
+ except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
+ set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
+ to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
+ at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
+ will inure to the benefit of Meta.
+
+ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
+ derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
+ you are and will be the owner of such derivative works and modifications.
+
+ c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
+ counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
+ of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
+ by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
+ claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
+ party arising out of or related to your use or distribution of the Llama Materials.
+
+ 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
+ to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
+ and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
+ Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
+ 4 and 7 shall survive the termination of this Agreement.
+
+ 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
+ California without regard to choice of law principles, and the UN Convention on Contracts for the International
+ Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
+ any dispute arising out of this Agreement.
README.md CHANGED
@@ -1,3 +1,490 @@
- ---
- license: llama3.2
- ---
+ ---
+ language:
+ - en
+ - de
+ - fr
+ - it
+ - pt
+ - hi
+ - es
+ - th
+ library_name: transformers
+ pipeline_tag: image-text-to-text
+ tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-3
+ widget:
+ - example_title: rococo art
+   messages:
+   - role: user
+     content:
+     - type: text
+       text: Which era does this piece belong to? Give details about the era.
+     - type: image_url
+       image_url:
+         url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/rococo.jpg
+   - role: assistant
+     content: |
+       The piece is a painting of a woman in a red dress, surrounded by gold and white ornate details. The woman is depicted in mid-air, with her arms outstretched and her legs bent at the knees. She is holding a bouquet of flowers in her right hand and a fruit in her left hand.
+
+       The background of the painting is a light blue sky with pink clouds, and there are also some pink flowers and green leaves surrounding the woman. The overall atmosphere of the painting is one of joy and celebration, as if the woman is dancing or celebrating something.
+
+       This piece belongs to the Rococo era, which was a style of art and architecture that emerged in Europe in the 18th century. The Rococo style is characterized by its use of pastel colors, curved lines, and ornate details. It was popularized during the reign of King Louis XV of France, who ruled from 1715 to 1774.
+ - example_title: diagram understanding
+   messages:
+   - role: user
+     content:
+     - type: text
+       text: Where do the droughts happen according to this diagram?
+     - type: image_url
+       image_url:
+         url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/weather_events.png
+   - role: assistant
+     content: |
+       # Drought Regions in 2016
+
+       According to the diagram, the regions that experienced severe droughts in 2016 are:
+
+       - **Eastern and Southern Africa**: This region was severely affected by droughts, with significant impacts on agriculture and water resources.
+       - **Western and Southern Africa**: Droughts also occurred in this region, leading to food insecurity and other challenges.
+
+       These regions were particularly vulnerable to droughts in 2016, highlighting the need for effective water management strategies and climate resilience measures.
+ - example_title: document qa
+   messages:
+   - role: user
+     content:
+     - type: text
+       text: How long does it take from invoice date to due date? Be short and concise.
+     - type: image_url
+       image_url:
+         url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/invoice.png
+   - role: assistant
+     content: |
+       To determine the time gap between the invoice date and the due date, we need to calculate the difference between these two dates.
+
+       Invoice Date: February 11, 2019
+       Due Date: February 26, 2019
+
+       Step 1: Calculate the Difference in Days
+
+       February 26 - February 11 = 15 days
+
+       Therefore, the time gap between the invoice date and the due date is 15 days.
+ license: llama3.2
+ extra_gated_prompt: >-
+   ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
+
+   Llama 3.2 Version Release Date: September 25, 2024
+
+   “Agreement” means the terms and conditions for use, reproduction, distribution
+   and modification of the Llama Materials set forth herein.
+
+   “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
+   distributed by Meta at https://llama.meta.com/doc/overview.
+
+   “Licensee” or “you” means you, or your employer or any other person or entity (if you are
+   entering into this Agreement on such person or entity’s behalf), of the age required under
+   applicable laws, rules or regulations to provide legal consent and that has legal authority
+   to bind your employer or such other person or entity if you are entering in this Agreement
+   on their behalf.
+
+   “Llama 3.2” means the foundational large language models and software and algorithms, including
+   machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
+   fine-tuning enabling code and other elements of the foregoing distributed by Meta at
+   https://www.llama.com/llama-downloads.
+
+   “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
+   any portion thereof) made available under this Agreement.
+
+   “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
+   if you are an entity, your principal place of business is in the EEA or Switzerland)
+   and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
+
+   By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
+   you agree to be bound by this Agreement.
+
+   1. License Rights and Redistribution.
+
+   a. Grant of Rights. You are granted a non-exclusive, worldwide,
+   non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
+   owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
+   of, and make modifications to the Llama Materials.
+
+   b. Redistribution and Use.
+
+   i. If you distribute or make available the Llama Materials (or any derivative works thereof),
+   or a product or service (including another AI model) that contains any of them, you shall (A) provide
+   a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
+   on a related website, user interface, blogpost, about page, or product documentation. If you use the
+   Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
+   otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
+   at the beginning of any such AI model name.
+
+   ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
+   of an integrated end user product, then Section 2 of this Agreement will not apply to you.
+
+   iii. You must retain in all copies of the Llama Materials that you distribute the
+   following attribution notice within a “Notice” text file distributed as a part of such copies:
+   “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
+   Inc. All Rights Reserved.”
+
+   iv. Your use of the Llama Materials must comply with applicable laws and regulations
+   (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
+   the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
+   incorporated by reference into this Agreement.
+
+   2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
+   of the products or services made available by or for Licensee, or Licensee’s affiliates,
+   is greater than 700 million monthly active users in the preceding calendar month, you must request
+   a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
+   exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
+
+   3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
+   RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
+   ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
+   OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
+   FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
+   WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
+
+   4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
+   WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
+   FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
+   IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
+
+   5. Intellectual Property.
+
+   a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
+   neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
+   except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
+   set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
+   to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
+   at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
+   will inure to the benefit of Meta.
+
+   b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
+   derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
+   you are and will be the owner of such derivative works and modifications.
+
+   c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
+   counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
+   of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
+   by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
+   claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
+   party arising out of or related to your use or distribution of the Llama Materials.
+
+   6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
+   to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
+   and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
+   Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
+   4 and 7 shall survive the termination of this Agreement.
+
+   7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
+   California without regard to choice of law principles, and the UN Convention on Contracts for the International
+   Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
+   any dispute arising out of this Agreement.
+
+   ### Llama 3.2 Acceptable Use Policy
+
+   Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
+   If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
+   The most recent copy of this policy can be found at
+   [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
+
+   #### Prohibited Uses
+
+   We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
+
+   1. Violate the law or others’ rights, including to:
+       1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
+           1. Violence or terrorism
+           2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
+           3. Human trafficking, exploitation, and sexual violence
+           4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
+           5. Sexual solicitation
+           6. Any other criminal activity
+       1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
+       2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
+       3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
+       4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
+       5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
+       6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
+       7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
+   2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
+       8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
+       9. Guns and illegal weapons (including weapon development)
+       10. Illegal drugs and regulated/controlled substances
+       11. Operation of critical infrastructure, transportation technologies, or heavy machinery
+       12. Self-harm or harm to others, including suicide, cutting, and eating disorders
+       13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
+   3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
+       14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
+       15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
+       16. Generating, promoting, or further distributing spam
+       17. Impersonating another individual without consent, authorization, or legal right
+       18. Representing that the use of Llama 3.2 or outputs are human-generated
+       19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
+   4. Fail to appropriately disclose to end users any known dangers of your AI system
+   5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
+
+   With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
+
+   Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
+
+   * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
+   * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
+   * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
+   * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com
+ extra_gated_fields:
+   First Name: text
+   Last Name: text
+   Date of birth: date_picker
+   Country: country
+   Affiliation: text
+   Job title:
+     type: select
+     options:
+     - Student
+     - Research Graduate
+     - AI researcher
+     - AI developer/engineer
+     - Reporter
+     - Other
+   geo: ip_location
+   By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
+ extra_gated_description: >-
+   The information you provide will be collected, stored, processed and shared in
+   accordance with the [Meta Privacy
+   Policy](https://www.facebook.com/privacy/policy/).
+ extra_gated_button_content: Submit
+ extra_gated_eu_disallowed: true
+ ---
+
+ ## Model Information
+
+ The Llama 3.2-Vision collection of multimodal large language models (LLMs) comprises pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.
+
+ **Model Developer:** Meta
+
+ **Model Architecture:** Llama 3.2-Vision is built on top of the Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM. A minimal sketch of this wiring appears below.
+
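+ The cross-attention adapter idea can be illustrated with a small, hypothetical PyTorch sketch. This is not the released Mllama implementation; the layer count, dimensions, gating, and names (`VisionCrossAttentionBlock`, `d_model`, `n_heads`) are illustrative assumptions:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class VisionCrossAttentionBlock(nn.Module):
+     """Hypothetical sketch: feeds image-encoder states into the LM via cross-attention."""
+     def __init__(self, d_model: int, n_heads: int):
+         super().__init__()
+         self.norm = nn.LayerNorm(d_model)
+         self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
+         # Zero-initialized gate: the block is a no-op at init, leaving the text model unchanged.
+         self.gate = nn.Parameter(torch.zeros(1))
+
+     def forward(self, text_hidden, image_hidden):
+         # Queries come from the language model; keys/values come from the vision encoder.
+         attn_out, _ = self.cross_attn(self.norm(text_hidden), image_hidden, image_hidden)
+         return text_hidden + torch.tanh(self.gate) * attn_out
+
+ # Toy shapes: batch of 2, 16 text tokens, 100 image patches, width 64.
+ block = VisionCrossAttentionBlock(d_model=64, n_heads=8)
+ out = block(torch.randn(2, 16, 64), torch.randn(2, 100, 64))
+ print(out.shape)  # torch.Size([2, 16, 64])
+ ```
+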
+ | | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff |
+ | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
+ | Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text + Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 |
+ | Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text + Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 |
+
+ **Supported Languages:** For text-only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note that for image+text applications, English is the only supported language.
+
+ Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
+
+ **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
+
+ **Model Release Date:** Sept 25, 2024
+
+ **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
+
+ **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
+
+ **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2-Vision in applications, please go [here](https://github.com/meta-llama/llama-recipes).
+
+ ## Intended Use
+
+ **Intended Use Cases:** Llama 3.2-Vision is intended for commercial and research use. Instruction tuned models are intended for visual recognition, image reasoning, captioning, and assistant-like chat with images, whereas pretrained models can be adapted for a variety of image reasoning tasks. Additionally, because of Llama 3.2-Vision’s ability to take images and text as inputs, additional use cases could include:
+
+ 1. Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it.
+ 2. Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image.
+ 3. Image Captioning: Image captioning bridges the gap between vision and language, extracting details, understanding the scene, and then crafting a sentence or two that tells the story.
+ 4. Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions. Similar to a search engine, but one that understands both pictures and words.
+ 5. Visual Grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions.
+
+ The Llama 3.2 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.2 Community License allows for these use cases.
+
+ **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
+
+ ## How to use
+
+ This repository contains two versions of Llama-3.2-11B-Vision-Instruct: one for use with transformers and one for use with the original `llama` codebase.
+
+ ### Use with transformers
+
+ Starting with transformers >= 4.45.0, you can run inference using conversational messages that may include an image to query about.
+
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.
+
+ ```python
+ import requests
+ import torch
+ from PIL import Image
+ from transformers import MllamaForConditionalGeneration, AutoProcessor
+
+ model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
+
+ # Load the model in bfloat16 and let accelerate place it on available devices.
+ model = MllamaForConditionalGeneration.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ processor = AutoProcessor.from_pretrained(model_id)
+
+ # Fetch an example image to ask about.
+ url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ # The {"type": "image"} entry marks where the image placeholder token goes in the prompt.
+ messages = [
+     {"role": "user", "content": [
+         {"type": "image"},
+         {"type": "text", "text": "If I had to write a haiku for this one, it would be: "}
+     ]}
+ ]
+ input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
+ inputs = processor(image, input_text, return_tensors="pt").to(model.device)
+
+ output = model.generate(**inputs, max_new_tokens=30)
+ print(processor.decode(output[0]))
+ ```
362
+
363
+ ### Use with `llama`
364
+
365
+ Please, follow the instructions in the [repository](https://github.com/meta-llama/llama).
366
+
367
+ To download the original checkpoints, you can use `huggingface-cli` as follows:
368
+
369
+ ```
370
+ huggingface-cli download meta-llama/Llama-3.2-11B-Vision-Instruct --include "original/*" --local-dir Llama-3.2-11B-Vision-Instruct
371
+ ```
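+
+ Equivalently, the same filtered download can be done from Python with `huggingface_hub` (a sketch, assuming the library is installed and you are authenticated for this gated repo):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download only the original-format checkpoint files into a local directory.
+ snapshot_download(
+     repo_id="meta-llama/Llama-3.2-11B-Vision-Instruct",
+     allow_patterns="original/*",
+     local_dir="Llama-3.2-11B-Vision-Instruct",
+ )
+ ```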
372
+
373
+ ## Hardware and Software
374
+
375
+ **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
376
+
377
+ **Training Energy Use:** Training utilized a cumulative of **2.02M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
378
+
379
+ ##
380
+
381
+ **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **584** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
382
+
383
+ | | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
384
+ | :---- | :---: | :---: | :---: | :---: |
385
+ | Llama 3.2-vision 11B | Stage 1 pretraining: 147K H100 hours Stage 2 annealing: 98K H100 hours SFT: 896 H100 hours RLHF: 224 H100 hours | 700 | 71 | 0 |
386
+ | Llama 3.2-vision 90B | Stage 1 pretraining: 885K H100 hours Stage 2 annealing: 885K H100 hours SFT: 3072 H100 hours RLHF: 2048 H100 hours | 700 | 513 | 0 |
387
+ | Total | 2.02M | | 584 | 0 |
388
+
389
+ The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
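+
+ As a quick sanity check, the per-stage figures in the table sum to the headline totals (a small illustrative script; all numbers are copied from the table above):
+
+ ```python
+ # Per-stage H100-80GB GPU hours, from the table above.
+ hours_11b = 147_000 + 98_000 + 896 + 224        # pretraining + annealing + SFT + RLHF
+ hours_90b = 885_000 + 885_000 + 3_072 + 2_048
+ total = hours_11b + hours_90b
+ print(f"{total:,} GPU hours (~{total / 1e6:.2f}M)")  # 2,021,240 GPU hours (~2.02M)
+ print(71 + 513, "tons CO2eq, location-based")        # 584 tons CO2eq, location-based
+ ```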
+
+ ## Training Data
+
+ **Overview:** Llama 3.2-Vision was pretrained on 6B image and text pairs. The instruction tuning data includes publicly available vision instruction datasets, as well as over 3M synthetically generated examples.
+
+ **Data Freshness:** The pretraining data has a cutoff of December 2023.
+
+ ## Benchmarks - Image Reasoning
+
+ In this section, we report the results for Llama 3.2-Vision models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
+
+ ### Base Pretrained Models
+
+ | Category | Benchmark | # Shots | Metric | Llama 3.2 11B | Llama 3.2 90B |
+ | ----- | ----- | ----- | ----- | ----- | ----- |
+ | Image Understanding | VQAv2 (val) | 0 | Accuracy | 66.8 | 73.6 |
+ | | Text VQA (val) | 0 | Relaxed accuracy | 73.1 | 73.5 |
+ | | DocVQA (val, unseen) | 0 | ANLS | 62.3 | 70.7 |
+ | Visual Reasoning | MMMU (val, 0-shot) | 0 | Micro average accuracy | 41.7 | 49.3 |
+ | | ChartQA (test) | 0 | Accuracy | 39.4 | 54.2 |
+ | | InfographicsQA (val, unseen) | 0 | ANLS | 43.2 | 56.8 |
+ | | AI2 Diagram (test) | 0 | Accuracy | 62.4 | 75.3 |
+
+ ### Instruction Tuned Models
+
+ | Modality | Capability | Benchmark | # Shots | Metric | Llama 3.2 11B | Llama 3.2 90B |
+ | ----- | :---: | ----- | :---: | :---: | ----- | ----- |
+ | Image | College-level Problems and Mathematical Reasoning | MMMU (val, CoT) | 0 | Micro average accuracy | 50.7 | 60.3 |
+ | | | MMMU-Pro, Standard (10 opts, test) | 0 | Accuracy | 33.0 | 45.2 |
+ | | | MMMU-Pro, Vision (test) | 0 | Accuracy | 23.7 | 33.8 |
+ | | | MathVista (testmini) | 0 | Accuracy | 51.5 | 57.3 |
+ | | Charts and Diagram Understanding | ChartQA (test, CoT) | 0 | Relaxed accuracy | 83.4 | 85.5 |
+ | | | AI2 Diagram (test) | 0 | Accuracy | 91.1 | 92.3 |
+ | | | DocVQA (test) | 0 | ANLS | 88.4 | 90.1 |
+ | | General Visual Question Answering | VQAv2 (test) | 0 | Accuracy | 75.2 | 78.1 |
+ | Text | General | MMLU (CoT) | 0 | Macro_avg/acc | 73.0 | 86.0 |
+ | | Math | MATH (CoT) | 0 | Final_em | 51.9 | 68.0 |
+ | | Reasoning | GPQA | 0 | Accuracy | 32.8 | 46.7 |
+ | | Multilingual | MGSM (CoT) | 0 | em | 68.9 | 86.9 |
+
+ ## Responsibility & Safety
+
+ As part of our responsible release approach, we followed a three-pronged strategy to manage trust & safety risks:
+
+ 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
+ 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
+ 3. Provide protections for the community to help prevent the misuse of our models.
+
+ ### Responsible Deployment
+
+ **Approach:** Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.
+
+ #### Llama 3.2 Instruct
+
+ **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the developer workload in deploying safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
+
+ **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
+
+ **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
+
+ #### Llama 3.2 Systems
+
+ **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation**; they should instead be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out of the box.
+
+ ### New Capabilities and Use Cases
+
+ **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
+
+ **Image Reasoning:** Llama 3.2-Vision models come with multimodal (text and image) input capabilities, enabling image reasoning applications. As part of our responsible release process, we took dedicated measures, including evaluations and mitigations, to address the risk of the models uniquely identifying individuals in images. As with other LLM risks, models may not always be robust to adversarial prompts, and developers should evaluate identification and other applicable risks in the context of their applications, as well as consider deploying Llama Guard 3-11B-Vision as part of their system or other mitigations as appropriate to detect and mitigate such risks.
+
+ ### Evaluations
+
+ **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case.
+
+ **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.
+
+ ### Critical Risks
+
+ In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas:
+
+ **1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** For Llama 3.1, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. For Llama 3.2-Vision models, we conducted additional targeted evaluations and found that it was unlikely Llama 3.2 presented an increase in scientific capabilities due to its added image understanding capability as compared to Llama 3.1.
+
+ **2. Child Safety:** Child safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in child safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content, while taking account of market-specific nuances and experiences.
+
+ **3. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
+ Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s vision capabilities are not generally germane to cyber uplift, we believe that the testing conducted for Llama 3.1 also applies to Llama 3.2.
+
+ ### Community
+
+ **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use, and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).
+
+ **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
+
+ **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
+
+ ## Ethical Considerations and Limitations
+
+ **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
+
+ **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
USE_POLICY.md ADDED
@@ -0,0 +1,52 @@
+ **Llama 3.2 Acceptable Use Policy**
+
+ Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
+
+ **Prohibited Uses**
+
+ We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
+
+ 1. Violate the law or others’ rights, including to:
+     1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
+         1. Violence or terrorism
+         2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
+         3. Human trafficking, exploitation, and sexual violence
+         4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
+         5. Sexual solicitation
+         6. Any other criminal activity
+     1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
+     2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
+     3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
+     4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
+     5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
+     6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
+     7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
+ 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
+     8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
+     9. Guns and illegal weapons (including weapon development)
+     10. Illegal drugs and regulated/controlled substances
+     11. Operation of critical infrastructure, transportation technologies, or heavy machinery
+     12. Self-harm or harm to others, including suicide, cutting, and eating disorders
+     13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
+ 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
+     14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
+     15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
+     16. Generating, promoting, or further distributing spam
+     17. Impersonating another individual without consent, authorization, or legal right
+     18. Representing that the use of Llama 3.2 or outputs are human-generated
+     19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
+ 4. Fail to appropriately disclose to end users any known dangers of your AI system
+ 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
+
+ With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
+
+ Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
+
+ * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
+ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
+ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
+ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com
chat_template - og.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n    {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n    {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n    {%- if strftime_now is defined %}\n        {%- set date_string = strftime_now(\"%d %b %Y\") %}\n    {%- else %}\n        {%- set date_string = \"26 Jul 2024\" %}\n    {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content']|trim %}\n    {%- set messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- Find out if there are any images #}\n{% set image_ns = namespace(has_images=false) %} \n{%- for message in messages %}\n    {%- for content in message['content'] %}\n        {%- if content['type'] == 'image' %}\n            {%- set image_ns.has_images = true %}\n        {%- endif %}\n    {%- endfor %}\n{%- endfor %}\n\n{#- Error out if there are images and system message #}\n{%- if image_ns.has_images and not system_message == \"\" %}\n    {{- raise_exception(\"Prompting with images is incompatible with system messages.\") }}\n{%- endif %}\n\n{#- System message if there are no images #}\n{%- if not image_ns.has_images %}\n    {{- \"<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n    {%- if tools is not none %}\n        {{- \"Environment: ipython\\n\" }}\n    {%- endif %}\n    {{- \"Cutting Knowledge Date: December 2023\\n\" }}\n    {{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n    {%- if tools is not none and not tools_in_user_message %}\n        {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n        {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n        {{- \"Do not use variables.\\n\\n\" }}\n        {%- for t in tools %}\n            {{- t | tojson(indent=4) }}\n            {{- \"\\n\\n\" }}\n        {%- endfor %}\n    {%- endif %}\n    {{- system_message }}\n    {{- \"<|eot_id|>\" }}\n{%- endif %}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n    {#- Extract the first user message so we can plug it in here #}\n    {%- if messages | length != 0 %}\n        {%- set first_user_message = messages[0]['content']|trim %}\n        {%- set messages = messages[1:] %}\n    {%- else %}\n        {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n    {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n    {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n    {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n    {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n' }}\n        {%- if message['content'] is string %}\n            {{- message['content'] }}\n        {%- else %}\n            {%- for content in message['content'] %}\n                {%- if content['type'] == 'image' %}\n                    {{- '<|image|>' }}\n                {%- elif content['type'] == 'text' %}\n                    {{- content['text'] }}\n                {%- endif %}\n            {%- endfor %}\n        {%- endif %}\n        {{- '<|eot_id|>' }}\n    {%- elif 'tool_calls' in message %}\n        {%- if not message.tool_calls|length == 1 %}\n            {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n        {%- endif %}\n        {%- set tool_call = message.tool_calls[0].function %}\n        {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n        {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n        {{- '\"parameters\": ' }}\n        {{- tool_call.arguments | tojson }}\n        {{- \"}\" }}\n        {{- \"<|eot_id|>\" }}\n    {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n        {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n        {%- if message.content is mapping or message.content is iterable %}\n            {{- message.content | tojson }}\n        {%- else %}\n            {{- message.content }}\n        {%- endif %}\n        {{- \"<|eot_id|>\" }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n"
3
+ }
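
Note: `chat_template - og.json` keeps Meta's stock Llama 3.2 Vision template for reference; a template like this is normally consumed through the processor's `apply_chat_template`, not read directly. A minimal sketch, assuming a local checkout of this repo at the `_name_or_path` used in config.json (a placeholder path):

```python
# Minimal sketch: render a vision chat prompt through this template.
# model_dir is a placeholder; point it at a local checkout of this repo.
from transformers import AutoProcessor

model_dir = "./models/meta-llama_Llama-3.2-11B-Vision-Instruct"
processor = AutoProcessor.from_pretrained(model_dir)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
# Under the og template this renders roughly:
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n<|image|>Describe ...<|eot_id|>
# <|start_header_id|>assistant<|end_header_id|>\n\n
```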
chat_template.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n    {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n    {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n    {%- if strftime_now is defined %}\n        {%- set date_string = strftime_now(\"%d %b %Y\") %}\n    {%- else %}\n        {%- set date_string = \"26 Jul 2024\" %}\n    {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content']|trim %}\n    {%- set messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- Find out if there are any images #}\n{% set image_ns = namespace(has_images=false) %} \n{%- for message in messages %}\n    {%- for content in message['content'] %}\n        {%- if content['type'] == 'image' %}\n            {%- set image_ns.has_images = true %}\n        {%- endif %}\n    {%- endfor %}\n{%- endfor %}\n\n{#- Error out if there are images and system message #}\n{%- if image_ns.has_images and not system_message == \"\" %}\n    {{- pass }}\n{%- endif %}\n\n{#- System message if there are no images #}\n{%- if true %}\n    {{- \"<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n    {%- if tools is not none %}\n        {{- \"Environment: ipython\\n\" }}\n    {%- endif %}\n    {{- \"Cutting Knowledge Date: December 2023\\n\" }}\n    {{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n    {%- if tools is not none and not tools_in_user_message %}\n        {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n        {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n        {{- \"Do not use variables.\\n\\n\" }}\n        {%- for t in tools %}\n            {{- t | tojson(indent=4) }}\n            {{- \"\\n\\n\" }}\n        {%- endfor %}\n    {%- endif %}\n    {{- system_message }}\n    {{- \"<|eot_id|>\" }}\n{%- endif %}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n    {#- Extract the first user message so we can plug it in here #}\n    {%- if messages | length != 0 %}\n        {%- set first_user_message = messages[0]['content']|trim %}\n        {%- set messages = messages[1:] %}\n    {%- else %}\n        {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n    {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n    {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n    {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n    {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n' }}\n        {%- if message['content'] is string %}\n            {{- message['content'] }}\n        {%- else %}\n            {%- for content in message['content'] %}\n                {%- if content['type'] == 'image' %}\n                    {{- '<|image|>' }}\n                {%- elif content['type'] == 'text' %}\n                    {{- content['text'] }}\n                {%- endif %}\n            {%- endfor %}\n        {%- endif %}\n        {{- '<|eot_id|>' }}\n    {%- elif 'tool_calls' in message %}\n        {%- if not message.tool_calls|length == 1 %}\n            {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n        {%- endif %}\n        {%- set tool_call = message.tool_calls[0].function %}\n        {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n        {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n        {{- '\"parameters\": ' }}\n        {{- tool_call.arguments | tojson }}\n        {{- \"}\" }}\n        {{- \"<|eot_id|>\" }}\n    {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n        {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n        {%- if message.content is mapping or message.content is iterable %}\n            {{- message.content | tojson }}\n        {%- else %}\n            {{- message.content }}\n        {%- endif %}\n        {{- \"<|eot_id|>\" }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n"
3
+ }
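
Note: the shipped `chat_template.json` differs from the og copy in exactly two places: the image-plus-system-message guard now renders `{{- pass }}` (an undefined name, which Jinja prints as an empty string) instead of calling `raise_exception(...)`, and the system block's condition `{%- if not image_ns.has_images %}` becomes `{%- if true %}`. Net effect: the system block is always emitted, and prompts that combine an image with a system message no longer error out. A quick sketch to confirm the delta, assuming both files sit in the working directory:

```python
# Sketch: print the line-level differences between the two templates.
import json

og = json.load(open("chat_template - og.json"))["chat_template"]
mod = json.load(open("chat_template.json"))["chat_template"]

# The substitutions are line-for-line, so a positional zip suffices.
for a, b in zip(og.split("\n"), mod.split("\n")):
    if a != b:
        print(f"- {a.strip()}\n+ {b.strip()}")
```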
config.json ADDED
@@ -0,0 +1,224 @@
1
+ {
2
+ "_name_or_path": "./models/meta-llama_Llama-3.2-11B-Vision-Instruct",
3
+ "architectures": [
4
+ "MllamaForConditionalGeneration"
5
+ ],
6
+ "image_token_index": 128256,
7
+ "model_type": "mllama",
8
+ "text_config": {
9
+ "_name_or_path": "",
10
+ "add_cross_attention": false,
11
+ "architectures": null,
12
+ "bad_words_ids": null,
13
+ "begin_suppress_tokens": null,
14
+ "bos_token_id": 128000,
15
+ "chunk_size_feed_forward": 0,
16
+ "cross_attention_hidden_size": null,
17
+ "cross_attention_layers": [
18
+ 3,
19
+ 8,
20
+ 13,
21
+ 18,
22
+ 23,
23
+ 28,
24
+ 33,
25
+ 38
26
+ ],
27
+ "decoder_start_token_id": null,
28
+ "diversity_penalty": 0.0,
29
+ "do_sample": false,
30
+ "dropout": 0,
31
+ "early_stopping": false,
32
+ "encoder_no_repeat_ngram_size": 0,
33
+ "eos_token_id": [
34
+ 128001,
35
+ 128008,
36
+ 128009
37
+ ],
38
+ "exponential_decay_length_penalty": null,
39
+ "finetuning_task": null,
40
+ "forced_bos_token_id": null,
41
+ "forced_eos_token_id": null,
42
+ "hidden_act": "silu",
43
+ "hidden_size": 4096,
44
+ "id2label": {
45
+ "0": "LABEL_0",
46
+ "1": "LABEL_1"
47
+ },
48
+ "initializer_range": 0.02,
49
+ "intermediate_size": 14336,
50
+ "is_decoder": false,
51
+ "is_encoder_decoder": false,
52
+ "label2id": {
53
+ "LABEL_0": 0,
54
+ "LABEL_1": 1
55
+ },
56
+ "length_penalty": 1.0,
57
+ "max_length": 20,
58
+ "max_position_embeddings": 131072,
59
+ "min_length": 0,
60
+ "model_type": "mllama_text_model",
61
+ "no_repeat_ngram_size": 0,
62
+ "num_attention_heads": 32,
63
+ "num_beam_groups": 1,
64
+ "num_beams": 1,
65
+ "num_hidden_layers": 40,
66
+ "num_key_value_heads": 8,
67
+ "num_return_sequences": 1,
68
+ "output_attentions": false,
69
+ "output_hidden_states": false,
70
+ "output_scores": false,
71
+ "pad_token_id": 128004,
72
+ "prefix": null,
73
+ "problem_type": null,
74
+ "pruned_heads": {},
75
+ "remove_invalid_values": false,
76
+ "repetition_penalty": 1.0,
77
+ "return_dict": true,
78
+ "return_dict_in_generate": false,
79
+ "rms_norm_eps": 1e-05,
80
+ "rope_scaling": {
81
+ "factor": 8.0,
82
+ "high_freq_factor": 4.0,
83
+ "low_freq_factor": 1.0,
84
+ "original_max_position_embeddings": 8192,
85
+ "rope_type": "llama3"
86
+ },
87
+ "rope_theta": 500000.0,
88
+ "sep_token_id": null,
89
+ "suppress_tokens": null,
90
+ "task_specific_params": null,
91
+ "temperature": 1.0,
92
+ "tf_legacy_loss": false,
93
+ "tie_encoder_decoder": false,
94
+ "tie_word_embeddings": false,
95
+ "tokenizer_class": null,
96
+ "top_k": 50,
97
+ "top_p": 1.0,
98
+ "torch_dtype": "bfloat16",
99
+ "torchscript": false,
100
+ "typical_p": 1.0,
101
+ "use_bfloat16": false,
102
+ "use_cache": true,
103
+ "vocab_size": 128256
104
+ },
105
+ "torch_dtype": "bfloat16",
106
+ "transformers_version": "4.45.0",
107
+ "vision_config": {
108
+ "_name_or_path": "",
109
+ "add_cross_attention": false,
110
+ "architectures": null,
111
+ "attention_heads": 16,
112
+ "bad_words_ids": null,
113
+ "begin_suppress_tokens": null,
114
+ "bos_token_id": null,
115
+ "chunk_size_feed_forward": 0,
116
+ "cross_attention_hidden_size": null,
117
+ "decoder_start_token_id": null,
118
+ "diversity_penalty": 0.0,
119
+ "do_sample": false,
120
+ "early_stopping": false,
121
+ "encoder_no_repeat_ngram_size": 0,
122
+ "eos_token_id": null,
123
+ "exponential_decay_length_penalty": null,
124
+ "finetuning_task": null,
125
+ "forced_bos_token_id": null,
126
+ "forced_eos_token_id": null,
127
+ "hidden_act": "gelu",
128
+ "hidden_size": 1280,
129
+ "id2label": {
130
+ "0": "LABEL_0",
131
+ "1": "LABEL_1"
132
+ },
133
+ "image_size": 560,
134
+ "initializer_range": 0.02,
135
+ "intermediate_layers_indices": [
136
+ 3,
137
+ 7,
138
+ 15,
139
+ 23,
140
+ 30
141
+ ],
142
+ "intermediate_size": 5120,
143
+ "is_decoder": false,
144
+ "is_encoder_decoder": false,
145
+ "label2id": {
146
+ "LABEL_0": 0,
147
+ "LABEL_1": 1
148
+ },
149
+ "length_penalty": 1.0,
150
+ "max_length": 20,
151
+ "max_num_tiles": 4,
152
+ "min_length": 0,
153
+ "model_type": "mllama_vision_model",
154
+ "no_repeat_ngram_size": 0,
155
+ "norm_eps": 1e-05,
156
+ "num_beam_groups": 1,
157
+ "num_beams": 1,
158
+ "num_channels": 3,
159
+ "num_global_layers": 8,
160
+ "num_hidden_layers": 32,
161
+ "num_return_sequences": 1,
162
+ "output_attentions": false,
163
+ "output_hidden_states": false,
164
+ "output_scores": false,
165
+ "pad_token_id": null,
166
+ "patch_size": 14,
167
+ "prefix": null,
168
+ "problem_type": null,
169
+ "pruned_heads": {},
170
+ "remove_invalid_values": false,
171
+ "repetition_penalty": 1.0,
172
+ "return_dict": true,
173
+ "return_dict_in_generate": false,
174
+ "sep_token_id": null,
175
+ "supported_aspect_ratios": [
176
+ [
177
+ 1,
178
+ 1
179
+ ],
180
+ [
181
+ 1,
182
+ 2
183
+ ],
184
+ [
185
+ 1,
186
+ 3
187
+ ],
188
+ [
189
+ 1,
190
+ 4
191
+ ],
192
+ [
193
+ 2,
194
+ 1
195
+ ],
196
+ [
197
+ 2,
198
+ 2
199
+ ],
200
+ [
201
+ 3,
202
+ 1
203
+ ],
204
+ [
205
+ 4,
206
+ 1
207
+ ]
208
+ ],
209
+ "suppress_tokens": null,
210
+ "task_specific_params": null,
211
+ "temperature": 1.0,
212
+ "tf_legacy_loss": false,
213
+ "tie_encoder_decoder": false,
214
+ "tie_word_embeddings": true,
215
+ "tokenizer_class": null,
216
+ "top_k": 50,
217
+ "top_p": 1.0,
218
+ "torch_dtype": "bfloat16",
219
+ "torchscript": false,
220
+ "typical_p": 1.0,
221
+ "use_bfloat16": false,
222
+ "vision_output_dim": 7680
223
+ }
224
+ }
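
Note: this config describes the mllama layout: a 40-layer text decoder (hidden size 4096, GQA with 32 query / 8 KV heads, 128k context via llama3 rope scaling) that cross-attends into the vision tower at every fifth layer (3, 8, 13, ..., 38), plus a 32-layer local / 8-global-layer vision encoder over 560x560 images in up to 4 tiles. A minimal loading sketch matching the bfloat16 `torch_dtype` above (`model_dir` is a placeholder):

```python
# Sketch: instantiate the model from this config and the shards below.
import torch
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_dir = "./models/meta-llama_Llama-3.2-11B-Vision-Instruct"  # placeholder
model = MllamaForConditionalGeneration.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,   # matches "torch_dtype" in config.json
    device_map="auto",            # optional; spreads the ~21 GB across devices
)
processor = AutoProcessor.from_pretrained(model_dir)
```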
generation_config.json ADDED
@@ -0,0 +1,14 @@
1
+ {
2
+ "bos_token_id": 128000,
3
+ "do_sample": true,
4
+ "eos_token_id": [
5
+ 128001,
6
+ 128008,
7
+ 128009
8
+ ],
9
+ "pad_token_id": 128004,
10
+ "temperature": 0.6,
11
+ "top_p": 1,
12
+ "min_p": 0.035,
13
+ "transformers_version": "4.45.0"
14
+ }
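
Note: these are the defaults `model.generate()` picks up automatically: sampling at temperature 0.6, with `top_p` at 1 (nucleus truncation effectively off) and `min_p` at 0.035, which drops any token whose probability falls below 3.5% of the most likely token's. A usage sketch with the defaults made explicit, continuing from the loading sketch above (`model`, `processor`, and `prompt` as defined there; the image path is a placeholder):

```python
# Sketch: generate with the sampling defaults from generation_config.json.
from PIL import Image

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,     # normally implicit via generation_config.json
    temperature=0.6,
    top_p=1.0,
    min_p=0.035,
)
print(processor.decode(out[0], skip_special_tokens=True))
```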
model-00001-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6311fb27c8f50a10dd6c1b5bab72a34c7b9166db6d4b912f30e9e84d98ceedc5
3
+ size 4992622346
model-00002-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71b837ab964ac664ec95e260e5ccd4e6dd36015a1556318627f7d46cddc8103e
3
+ size 4966251712
model-00003-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:782bd89993ca1c833ac12295f77a1b6995e7c6bed05958130dd70535e99af1ec
3
+ size 4915919704
model-00004-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dade48c2dc1503816a1010aae917600c328e6676bf9a3f27446490426b1f68bf
3
+ size 4999823980
model-00005-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e2bb6ba09007e45e5d257776aab3febddaa0d7df5eca9d391bb232435e90220
3
+ size 1465943128
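
Note: these five three-line stubs are Git LFS pointers, not the weights themselves; the actual ~21 GB of shards is fetched by LFS at clone/download time and can be validated against the recorded size and sha256 oid. A small verification sketch for the last shard, using the values from the pointer above:

```python
# Sketch: check a downloaded shard against its LFS pointer metadata.
import hashlib
import os

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks to avoid loading 5 GB shards into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

path = "model-00005-of-00005.safetensors"
assert os.path.getsize(path) == 1465943128  # "size" from the pointer
assert sha256_of(path) == "0e2bb6ba09007e45e5d257776aab3febddaa0d7df5eca9d391bb232435e90220"
```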
model.safetensors.index.json ADDED
@@ -0,0 +1,913 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 21340441670
4
+ },
5
+ "weight_map": {
6
+ "language_model.lm_head.weight": "model-00005-of-00005.safetensors",
7
+ "language_model.model.embed_tokens.weight": "model-00001-of-00005.safetensors",
8
+ "language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
9
+ "language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
10
+ "language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
11
+ "language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
12
+ "language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
13
+ "language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
14
+ "language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
15
+ "language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
16
+ "language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
17
+ "language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
18
+ "language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
19
+ "language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
20
+ "language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
21
+ "language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
22
+ "language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
23
+ "language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
24
+ "language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
25
+ "language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
26
+ "language_model.model.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
27
+ "language_model.model.layers.10.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
28
+ "language_model.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
29
+ "language_model.model.layers.10.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
30
+ "language_model.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
31
+ "language_model.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
32
+ "language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
33
+ "language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
34
+ "language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
35
+ "language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
36
+ "language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
37
+ "language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
38
+ "language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
39
+ "language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
40
+ "language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
41
+ "language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
42
+ "language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
43
+ "language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
44
+ "language_model.model.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
45
+ "language_model.model.layers.12.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
46
+ "language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
47
+ "language_model.model.layers.12.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
48
+ "language_model.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
49
+ "language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
50
+ "language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
51
+ "language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
52
+ "language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
53
+ "language_model.model.layers.13.cross_attn.k_norm.weight": "model-00002-of-00005.safetensors",
54
+ "language_model.model.layers.13.cross_attn.k_proj.weight": "model-00002-of-00005.safetensors",
55
+ "language_model.model.layers.13.cross_attn.o_proj.weight": "model-00002-of-00005.safetensors",
56
+ "language_model.model.layers.13.cross_attn.q_norm.weight": "model-00002-of-00005.safetensors",
57
+ "language_model.model.layers.13.cross_attn.q_proj.weight": "model-00002-of-00005.safetensors",
58
+ "language_model.model.layers.13.cross_attn.v_proj.weight": "model-00002-of-00005.safetensors",
59
+ "language_model.model.layers.13.cross_attn_attn_gate": "model-00002-of-00005.safetensors",
60
+ "language_model.model.layers.13.cross_attn_mlp_gate": "model-00002-of-00005.safetensors",
61
+ "language_model.model.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
62
+ "language_model.model.layers.13.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
63
+ "language_model.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
64
+ "language_model.model.layers.13.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
65
+ "language_model.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
66
+ "language_model.model.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
67
+ "language_model.model.layers.14.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
68
+ "language_model.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
69
+ "language_model.model.layers.14.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
70
+ "language_model.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
71
+ "language_model.model.layers.14.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
72
+ "language_model.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
73
+ "language_model.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
74
+ "language_model.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
75
+ "language_model.model.layers.15.input_layernorm.weight": "model-00002-of-00005.safetensors",
76
+ "language_model.model.layers.15.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
77
+ "language_model.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
78
+ "language_model.model.layers.15.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
79
+ "language_model.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
80
+ "language_model.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
81
+ "language_model.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
82
+ "language_model.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
83
+ "language_model.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
84
+ "language_model.model.layers.16.input_layernorm.weight": "model-00003-of-00005.safetensors",
85
+ "language_model.model.layers.16.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
86
+ "language_model.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
87
+ "language_model.model.layers.16.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
88
+ "language_model.model.layers.16.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
89
+ "language_model.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
90
+ "language_model.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
91
+ "language_model.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
92
+ "language_model.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
93
+ "language_model.model.layers.17.input_layernorm.weight": "model-00003-of-00005.safetensors",
94
+ "language_model.model.layers.17.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
95
+ "language_model.model.layers.17.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
96
+ "language_model.model.layers.17.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
97
+ "language_model.model.layers.17.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
98
+ "language_model.model.layers.17.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
99
+ "language_model.model.layers.17.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
100
+ "language_model.model.layers.17.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
101
+ "language_model.model.layers.17.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
102
+ "language_model.model.layers.18.cross_attn.k_norm.weight": "model-00003-of-00005.safetensors",
103
+ "language_model.model.layers.18.cross_attn.k_proj.weight": "model-00003-of-00005.safetensors",
104
+ "language_model.model.layers.18.cross_attn.o_proj.weight": "model-00003-of-00005.safetensors",
105
+ "language_model.model.layers.18.cross_attn.q_norm.weight": "model-00003-of-00005.safetensors",
106
+ "language_model.model.layers.18.cross_attn.q_proj.weight": "model-00003-of-00005.safetensors",
107
+ "language_model.model.layers.18.cross_attn.v_proj.weight": "model-00003-of-00005.safetensors",
108
+ "language_model.model.layers.18.cross_attn_attn_gate": "model-00003-of-00005.safetensors",
109
+ "language_model.model.layers.18.cross_attn_mlp_gate": "model-00003-of-00005.safetensors",
110
+ "language_model.model.layers.18.input_layernorm.weight": "model-00003-of-00005.safetensors",
111
+ "language_model.model.layers.18.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
112
+ "language_model.model.layers.18.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
113
+ "language_model.model.layers.18.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
114
+ "language_model.model.layers.18.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
115
+ "language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00005.safetensors",
116
+ "language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
117
+ "language_model.model.layers.19.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
118
+ "language_model.model.layers.19.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
119
+ "language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
120
+ "language_model.model.layers.19.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
121
+ "language_model.model.layers.19.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
122
+ "language_model.model.layers.19.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
123
+ "language_model.model.layers.19.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
124
+ "language_model.model.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
125
+ "language_model.model.layers.2.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
126
+ "language_model.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
127
+ "language_model.model.layers.2.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
128
+ "language_model.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
129
+ "language_model.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
130
+ "language_model.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
131
+ "language_model.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
132
+ "language_model.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
133
+ "language_model.model.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
134
+ "language_model.model.layers.20.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
135
+ "language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
136
+ "language_model.model.layers.20.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
137
+ "language_model.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
138
+ "language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
139
+ "language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
140
+ "language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
141
+ "language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
142
+ "language_model.model.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
143
+ "language_model.model.layers.21.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
144
+ "language_model.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
145
+ "language_model.model.layers.21.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
146
+ "language_model.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
147
+ "language_model.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
148
+ "language_model.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
149
+ "language_model.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
150
+ "language_model.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
151
+ "language_model.model.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
152
+ "language_model.model.layers.22.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
153
+ "language_model.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
154
+ "language_model.model.layers.22.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
155
+ "language_model.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
156
+ "language_model.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
157
+ "language_model.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
158
+ "language_model.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
159
+ "language_model.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
160
+ "language_model.model.layers.23.cross_attn.k_norm.weight": "model-00003-of-00005.safetensors",
161
+ "language_model.model.layers.23.cross_attn.k_proj.weight": "model-00003-of-00005.safetensors",
162
+ "language_model.model.layers.23.cross_attn.o_proj.weight": "model-00003-of-00005.safetensors",
163
+ "language_model.model.layers.23.cross_attn.q_norm.weight": "model-00003-of-00005.safetensors",
164
+ "language_model.model.layers.23.cross_attn.q_proj.weight": "model-00003-of-00005.safetensors",
165
+ "language_model.model.layers.23.cross_attn.v_proj.weight": "model-00003-of-00005.safetensors",
166
+ "language_model.model.layers.23.cross_attn_attn_gate": "model-00003-of-00005.safetensors",
167
+ "language_model.model.layers.23.cross_attn_mlp_gate": "model-00003-of-00005.safetensors",
168
+ "language_model.model.layers.23.input_layernorm.weight": "model-00003-of-00005.safetensors",
169
+ "language_model.model.layers.23.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
170
+ "language_model.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
171
+ "language_model.model.layers.23.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
172
+ "language_model.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
173
+ "language_model.model.layers.24.input_layernorm.weight": "model-00003-of-00005.safetensors",
174
+ "language_model.model.layers.24.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
175
+ "language_model.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
176
+ "language_model.model.layers.24.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
177
+ "language_model.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
178
+ "language_model.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
179
+ "language_model.model.layers.24.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
180
+ "language_model.model.layers.24.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
181
+ "language_model.model.layers.24.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
182
+ "language_model.model.layers.25.input_layernorm.weight": "model-00003-of-00005.safetensors",
183
+ "language_model.model.layers.25.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
184
+ "language_model.model.layers.25.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
185
+ "language_model.model.layers.25.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
186
+ "language_model.model.layers.25.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
187
+ "language_model.model.layers.25.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
188
+ "language_model.model.layers.25.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
189
+ "language_model.model.layers.25.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
190
+ "language_model.model.layers.25.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
191
+ "language_model.model.layers.26.input_layernorm.weight": "model-00003-of-00005.safetensors",
192
+ "language_model.model.layers.26.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
193
+ "language_model.model.layers.26.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
194
+ "language_model.model.layers.26.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
195
+ "language_model.model.layers.26.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
196
+ "language_model.model.layers.26.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
197
+ "language_model.model.layers.26.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
198
+ "language_model.model.layers.26.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
199
+ "language_model.model.layers.26.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
200
+ "language_model.model.layers.27.input_layernorm.weight": "model-00004-of-00005.safetensors",
201
+ "language_model.model.layers.27.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
202
+ "language_model.model.layers.27.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
203
+ "language_model.model.layers.27.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
204
+ "language_model.model.layers.27.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
205
+ "language_model.model.layers.27.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
206
+ "language_model.model.layers.27.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
207
+ "language_model.model.layers.27.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
208
+ "language_model.model.layers.27.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
209
+ "language_model.model.layers.28.cross_attn.k_norm.weight": "model-00004-of-00005.safetensors",
210
+ "language_model.model.layers.28.cross_attn.k_proj.weight": "model-00004-of-00005.safetensors",
211
+ "language_model.model.layers.28.cross_attn.o_proj.weight": "model-00004-of-00005.safetensors",
212
+ "language_model.model.layers.28.cross_attn.q_norm.weight": "model-00004-of-00005.safetensors",
213
+ "language_model.model.layers.28.cross_attn.q_proj.weight": "model-00004-of-00005.safetensors",
214
+ "language_model.model.layers.28.cross_attn.v_proj.weight": "model-00004-of-00005.safetensors",
215
+ "language_model.model.layers.28.cross_attn_attn_gate": "model-00004-of-00005.safetensors",
216
+ "language_model.model.layers.28.cross_attn_mlp_gate": "model-00004-of-00005.safetensors",
217
+ "language_model.model.layers.28.input_layernorm.weight": "model-00004-of-00005.safetensors",
218
+ "language_model.model.layers.28.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
219
+ "language_model.model.layers.28.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
220
+ "language_model.model.layers.28.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
221
+ "language_model.model.layers.28.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
222
+ "language_model.model.layers.29.input_layernorm.weight": "model-00004-of-00005.safetensors",
223
+ "language_model.model.layers.29.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
224
+ "language_model.model.layers.29.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
225
+ "language_model.model.layers.29.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
226
+ "language_model.model.layers.29.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
227
+ "language_model.model.layers.29.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
228
+ "language_model.model.layers.29.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
229
+ "language_model.model.layers.29.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
230
+ "language_model.model.layers.29.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
231
+ "language_model.model.layers.3.cross_attn.k_norm.weight": "model-00001-of-00005.safetensors",
232
+ "language_model.model.layers.3.cross_attn.k_proj.weight": "model-00001-of-00005.safetensors",
233
+ "language_model.model.layers.3.cross_attn.o_proj.weight": "model-00001-of-00005.safetensors",
234
+ "language_model.model.layers.3.cross_attn.q_norm.weight": "model-00001-of-00005.safetensors",
235
+ "language_model.model.layers.3.cross_attn.q_proj.weight": "model-00001-of-00005.safetensors",
236
+ "language_model.model.layers.3.cross_attn.v_proj.weight": "model-00001-of-00005.safetensors",
237
+ "language_model.model.layers.3.cross_attn_attn_gate": "model-00001-of-00005.safetensors",
238
+ "language_model.model.layers.3.cross_attn_mlp_gate": "model-00001-of-00005.safetensors",
239
+ "language_model.model.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
240
+ "language_model.model.layers.3.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
241
+ "language_model.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
242
+ "language_model.model.layers.3.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
243
+ "language_model.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
244
+ "language_model.model.layers.30.input_layernorm.weight": "model-00004-of-00005.safetensors",
245
+ "language_model.model.layers.30.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
246
+ "language_model.model.layers.30.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
247
+ "language_model.model.layers.30.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
248
+ "language_model.model.layers.30.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
249
+ "language_model.model.layers.30.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
250
+ "language_model.model.layers.30.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
251
+ "language_model.model.layers.30.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
252
+ "language_model.model.layers.30.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
253
+ "language_model.model.layers.31.input_layernorm.weight": "model-00004-of-00005.safetensors",
254
+ "language_model.model.layers.31.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
255
+ "language_model.model.layers.31.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
256
+ "language_model.model.layers.31.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
257
+ "language_model.model.layers.31.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
258
+ "language_model.model.layers.31.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
259
+ "language_model.model.layers.31.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
260
+ "language_model.model.layers.31.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
261
+ "language_model.model.layers.31.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
262
+ "language_model.model.layers.32.input_layernorm.weight": "model-00004-of-00005.safetensors",
263
+ "language_model.model.layers.32.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
264
+ "language_model.model.layers.32.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
265
+ "language_model.model.layers.32.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
266
+ "language_model.model.layers.32.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
267
+ "language_model.model.layers.32.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
268
+ "language_model.model.layers.32.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
269
+ "language_model.model.layers.32.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
270
+ "language_model.model.layers.32.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
271
+ "language_model.model.layers.33.cross_attn.k_norm.weight": "model-00004-of-00005.safetensors",
272
+ "language_model.model.layers.33.cross_attn.k_proj.weight": "model-00004-of-00005.safetensors",
273
+ "language_model.model.layers.33.cross_attn.o_proj.weight": "model-00004-of-00005.safetensors",
274
+ "language_model.model.layers.33.cross_attn.q_norm.weight": "model-00004-of-00005.safetensors",
275
+ "language_model.model.layers.33.cross_attn.q_proj.weight": "model-00004-of-00005.safetensors",
276
+ "language_model.model.layers.33.cross_attn.v_proj.weight": "model-00004-of-00005.safetensors",
277
+ "language_model.model.layers.33.cross_attn_attn_gate": "model-00004-of-00005.safetensors",
278
+ "language_model.model.layers.33.cross_attn_mlp_gate": "model-00004-of-00005.safetensors",
279
+ "language_model.model.layers.33.input_layernorm.weight": "model-00004-of-00005.safetensors",
280
+ "language_model.model.layers.33.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
281
+ "language_model.model.layers.33.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
282
+ "language_model.model.layers.33.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
283
+ "language_model.model.layers.33.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
284
+ "language_model.model.layers.34.input_layernorm.weight": "model-00004-of-00005.safetensors",
285
+ "language_model.model.layers.34.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
286
+ "language_model.model.layers.34.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
287
+ "language_model.model.layers.34.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
288
+ "language_model.model.layers.34.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
289
+ "language_model.model.layers.34.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
290
+ "language_model.model.layers.34.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
291
+ "language_model.model.layers.34.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
292
+ "language_model.model.layers.34.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
293
+ "language_model.model.layers.35.input_layernorm.weight": "model-00004-of-00005.safetensors",
294
+ "language_model.model.layers.35.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
295
+ "language_model.model.layers.35.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
296
+ "language_model.model.layers.35.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
297
+ "language_model.model.layers.35.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
298
+ "language_model.model.layers.35.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
299
+ "language_model.model.layers.35.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
300
+ "language_model.model.layers.35.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
301
+ "language_model.model.layers.35.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
302
+ "language_model.model.layers.36.input_layernorm.weight": "model-00004-of-00005.safetensors",
303
+ "language_model.model.layers.36.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
304
+ "language_model.model.layers.36.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
305
+ "language_model.model.layers.36.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
306
+ "language_model.model.layers.36.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
307
+ "language_model.model.layers.36.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
308
+ "language_model.model.layers.36.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
309
+ "language_model.model.layers.36.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
310
+ "language_model.model.layers.36.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
311
+ "language_model.model.layers.37.input_layernorm.weight": "model-00004-of-00005.safetensors",
312
+ "language_model.model.layers.37.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
313
+ "language_model.model.layers.37.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
314
+ "language_model.model.layers.37.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
315
+ "language_model.model.layers.37.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
316
+ "language_model.model.layers.37.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
317
+ "language_model.model.layers.37.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
318
+ "language_model.model.layers.37.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
319
+ "language_model.model.layers.37.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
320
+ "language_model.model.layers.38.cross_attn.k_norm.weight": "model-00004-of-00005.safetensors",
321
+ "language_model.model.layers.38.cross_attn.k_proj.weight": "model-00004-of-00005.safetensors",
322
+ "language_model.model.layers.38.cross_attn.o_proj.weight": "model-00004-of-00005.safetensors",
323
+ "language_model.model.layers.38.cross_attn.q_norm.weight": "model-00004-of-00005.safetensors",
324
+ "language_model.model.layers.38.cross_attn.q_proj.weight": "model-00004-of-00005.safetensors",
325
+ "language_model.model.layers.38.cross_attn.v_proj.weight": "model-00004-of-00005.safetensors",
326
+ "language_model.model.layers.38.cross_attn_attn_gate": "model-00004-of-00005.safetensors",
327
+ "language_model.model.layers.38.cross_attn_mlp_gate": "model-00004-of-00005.safetensors",
328
+ "language_model.model.layers.38.input_layernorm.weight": "model-00004-of-00005.safetensors",
329
+ "language_model.model.layers.38.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
330
+ "language_model.model.layers.38.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
331
+ "language_model.model.layers.38.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
332
+ "language_model.model.layers.38.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
333
+ "language_model.model.layers.39.input_layernorm.weight": "model-00005-of-00005.safetensors",
334
+ "language_model.model.layers.39.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
335
+ "language_model.model.layers.39.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
336
+ "language_model.model.layers.39.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
337
+ "language_model.model.layers.39.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
338
+ "language_model.model.layers.39.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
339
+ "language_model.model.layers.39.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
340
+ "language_model.model.layers.39.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
341
+ "language_model.model.layers.39.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
342
+ "language_model.model.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
343
+ "language_model.model.layers.4.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
344
+ "language_model.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
345
+ "language_model.model.layers.4.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
346
+ "language_model.model.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
347
+ "language_model.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
348
+ "language_model.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
349
+ "language_model.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
350
+ "language_model.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
351
+ "language_model.model.layers.5.input_layernorm.weight": "model-00002-of-00005.safetensors",
352
+ "language_model.model.layers.5.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
353
+ "language_model.model.layers.5.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
354
+ "language_model.model.layers.5.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
355
+ "language_model.model.layers.5.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
356
+ "language_model.model.layers.5.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
357
+ "language_model.model.layers.5.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
358
+ "language_model.model.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
359
+ "language_model.model.layers.5.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
360
+ "language_model.model.layers.6.input_layernorm.weight": "model-00002-of-00005.safetensors",
361
+ "language_model.model.layers.6.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
362
+ "language_model.model.layers.6.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
363
+ "language_model.model.layers.6.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
364
+ "language_model.model.layers.6.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
365
+ "language_model.model.layers.6.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
366
+ "language_model.model.layers.6.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
367
+ "language_model.model.layers.6.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
368
+ "language_model.model.layers.6.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
369
+ "language_model.model.layers.7.input_layernorm.weight": "model-00002-of-00005.safetensors",
370
+ "language_model.model.layers.7.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
371
+ "language_model.model.layers.7.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
372
+ "language_model.model.layers.7.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
373
+ "language_model.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
374
+ "language_model.model.layers.7.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
375
+ "language_model.model.layers.7.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
376
+ "language_model.model.layers.7.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
377
+ "language_model.model.layers.7.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
378
+ "language_model.model.layers.8.cross_attn.k_norm.weight": "model-00002-of-00005.safetensors",
379
+ "language_model.model.layers.8.cross_attn.k_proj.weight": "model-00002-of-00005.safetensors",
380
+ "language_model.model.layers.8.cross_attn.o_proj.weight": "model-00002-of-00005.safetensors",
381
+ "language_model.model.layers.8.cross_attn.q_norm.weight": "model-00002-of-00005.safetensors",
382
+ "language_model.model.layers.8.cross_attn.q_proj.weight": "model-00002-of-00005.safetensors",
383
+ "language_model.model.layers.8.cross_attn.v_proj.weight": "model-00002-of-00005.safetensors",
384
+ "language_model.model.layers.8.cross_attn_attn_gate": "model-00002-of-00005.safetensors",
385
+ "language_model.model.layers.8.cross_attn_mlp_gate": "model-00002-of-00005.safetensors",
386
+ "language_model.model.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
387
+ "language_model.model.layers.8.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
388
+ "language_model.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
389
+ "language_model.model.layers.8.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
390
+ "language_model.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
391
+ "language_model.model.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
392
+ "language_model.model.layers.9.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
393
+ "language_model.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
394
+ "language_model.model.layers.9.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
395
+ "language_model.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
396
+ "language_model.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
397
+ "language_model.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
398
+ "language_model.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
399
+ "language_model.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
400
+ "language_model.model.norm.weight": "model-00005-of-00005.safetensors",
401
+ "multi_modal_projector.bias": "model-00005-of-00005.safetensors",
402
+ "multi_modal_projector.weight": "model-00005-of-00005.safetensors",
403
+ "vision_model.class_embedding": "model-00001-of-00005.safetensors",
404
+ "vision_model.gated_positional_embedding.embedding": "model-00001-of-00005.safetensors",
405
+ "vision_model.gated_positional_embedding.gate": "model-00001-of-00005.safetensors",
406
+ "vision_model.gated_positional_embedding.tile_embedding.weight": "model-00001-of-00005.safetensors",
407
+ "vision_model.global_transformer.layers.0.gate_attn": "model-00001-of-00005.safetensors",
408
+ "vision_model.global_transformer.layers.0.gate_ffn": "model-00001-of-00005.safetensors",
409
+ "vision_model.global_transformer.layers.0.input_layernorm.bias": "model-00001-of-00005.safetensors",
410
+ "vision_model.global_transformer.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
411
+ "vision_model.global_transformer.layers.0.mlp.fc1.bias": "model-00001-of-00005.safetensors",
412
+ "vision_model.global_transformer.layers.0.mlp.fc1.weight": "model-00001-of-00005.safetensors",
413
+ "vision_model.global_transformer.layers.0.mlp.fc2.bias": "model-00001-of-00005.safetensors",
414
+ "vision_model.global_transformer.layers.0.mlp.fc2.weight": "model-00001-of-00005.safetensors",
415
+ "vision_model.global_transformer.layers.0.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
416
+ "vision_model.global_transformer.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
417
+ "vision_model.global_transformer.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
418
+ "vision_model.global_transformer.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
419
+ "vision_model.global_transformer.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
420
+ "vision_model.global_transformer.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
421
+ "vision_model.global_transformer.layers.1.gate_attn": "model-00001-of-00005.safetensors",
422
+ "vision_model.global_transformer.layers.1.gate_ffn": "model-00001-of-00005.safetensors",
423
+ "vision_model.global_transformer.layers.1.input_layernorm.bias": "model-00001-of-00005.safetensors",
424
+ "vision_model.global_transformer.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
425
+ "vision_model.global_transformer.layers.1.mlp.fc1.bias": "model-00001-of-00005.safetensors",
426
+ "vision_model.global_transformer.layers.1.mlp.fc1.weight": "model-00001-of-00005.safetensors",
427
+ "vision_model.global_transformer.layers.1.mlp.fc2.bias": "model-00001-of-00005.safetensors",
428
+ "vision_model.global_transformer.layers.1.mlp.fc2.weight": "model-00001-of-00005.safetensors",
429
+ "vision_model.global_transformer.layers.1.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
430
+ "vision_model.global_transformer.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
431
+ "vision_model.global_transformer.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
432
+ "vision_model.global_transformer.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
433
+ "vision_model.global_transformer.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
434
+ "vision_model.global_transformer.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
435
+ "vision_model.global_transformer.layers.2.gate_attn": "model-00001-of-00005.safetensors",
436
+ "vision_model.global_transformer.layers.2.gate_ffn": "model-00001-of-00005.safetensors",
437
+ "vision_model.global_transformer.layers.2.input_layernorm.bias": "model-00001-of-00005.safetensors",
438
+ "vision_model.global_transformer.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
439
+ "vision_model.global_transformer.layers.2.mlp.fc1.bias": "model-00001-of-00005.safetensors",
440
+ "vision_model.global_transformer.layers.2.mlp.fc1.weight": "model-00001-of-00005.safetensors",
441
+ "vision_model.global_transformer.layers.2.mlp.fc2.bias": "model-00001-of-00005.safetensors",
442
+ "vision_model.global_transformer.layers.2.mlp.fc2.weight": "model-00001-of-00005.safetensors",
443
+ "vision_model.global_transformer.layers.2.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
444
+ "vision_model.global_transformer.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
445
+ "vision_model.global_transformer.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
446
+ "vision_model.global_transformer.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
447
+ "vision_model.global_transformer.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
448
+ "vision_model.global_transformer.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
449
+ "vision_model.global_transformer.layers.3.gate_attn": "model-00001-of-00005.safetensors",
450
+ "vision_model.global_transformer.layers.3.gate_ffn": "model-00001-of-00005.safetensors",
451
+ "vision_model.global_transformer.layers.3.input_layernorm.bias": "model-00001-of-00005.safetensors",
452
+ "vision_model.global_transformer.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
453
+ "vision_model.global_transformer.layers.3.mlp.fc1.bias": "model-00001-of-00005.safetensors",
454
+ "vision_model.global_transformer.layers.3.mlp.fc1.weight": "model-00001-of-00005.safetensors",
455
+ "vision_model.global_transformer.layers.3.mlp.fc2.bias": "model-00001-of-00005.safetensors",
456
+ "vision_model.global_transformer.layers.3.mlp.fc2.weight": "model-00001-of-00005.safetensors",
457
+ "vision_model.global_transformer.layers.3.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
458
+ "vision_model.global_transformer.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
459
+ "vision_model.global_transformer.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
460
+ "vision_model.global_transformer.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
461
+ "vision_model.global_transformer.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
462
+ "vision_model.global_transformer.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
463
+ "vision_model.global_transformer.layers.4.gate_attn": "model-00001-of-00005.safetensors",
464
+ "vision_model.global_transformer.layers.4.gate_ffn": "model-00001-of-00005.safetensors",
465
+ "vision_model.global_transformer.layers.4.input_layernorm.bias": "model-00001-of-00005.safetensors",
466
+ "vision_model.global_transformer.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
467
+ "vision_model.global_transformer.layers.4.mlp.fc1.bias": "model-00001-of-00005.safetensors",
468
+ "vision_model.global_transformer.layers.4.mlp.fc1.weight": "model-00001-of-00005.safetensors",
469
+ "vision_model.global_transformer.layers.4.mlp.fc2.bias": "model-00001-of-00005.safetensors",
470
+ "vision_model.global_transformer.layers.4.mlp.fc2.weight": "model-00001-of-00005.safetensors",
471
+ "vision_model.global_transformer.layers.4.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
472
+ "vision_model.global_transformer.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
473
+ "vision_model.global_transformer.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
474
+ "vision_model.global_transformer.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
475
+ "vision_model.global_transformer.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
476
+ "vision_model.global_transformer.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
477
+ "vision_model.global_transformer.layers.5.gate_attn": "model-00001-of-00005.safetensors",
478
+ "vision_model.global_transformer.layers.5.gate_ffn": "model-00001-of-00005.safetensors",
479
+ "vision_model.global_transformer.layers.5.input_layernorm.bias": "model-00001-of-00005.safetensors",
480
+ "vision_model.global_transformer.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
481
+ "vision_model.global_transformer.layers.5.mlp.fc1.bias": "model-00001-of-00005.safetensors",
482
+ "vision_model.global_transformer.layers.5.mlp.fc1.weight": "model-00001-of-00005.safetensors",
483
+ "vision_model.global_transformer.layers.5.mlp.fc2.bias": "model-00001-of-00005.safetensors",
484
+ "vision_model.global_transformer.layers.5.mlp.fc2.weight": "model-00001-of-00005.safetensors",
485
+ "vision_model.global_transformer.layers.5.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
486
+ "vision_model.global_transformer.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
487
+ "vision_model.global_transformer.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
488
+ "vision_model.global_transformer.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
489
+ "vision_model.global_transformer.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
490
+ "vision_model.global_transformer.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
491
+ "vision_model.global_transformer.layers.6.gate_attn": "model-00001-of-00005.safetensors",
492
+ "vision_model.global_transformer.layers.6.gate_ffn": "model-00001-of-00005.safetensors",
493
+ "vision_model.global_transformer.layers.6.input_layernorm.bias": "model-00001-of-00005.safetensors",
494
+ "vision_model.global_transformer.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
495
+ "vision_model.global_transformer.layers.6.mlp.fc1.bias": "model-00001-of-00005.safetensors",
496
+ "vision_model.global_transformer.layers.6.mlp.fc1.weight": "model-00001-of-00005.safetensors",
497
+ "vision_model.global_transformer.layers.6.mlp.fc2.bias": "model-00001-of-00005.safetensors",
498
+ "vision_model.global_transformer.layers.6.mlp.fc2.weight": "model-00001-of-00005.safetensors",
499
+ "vision_model.global_transformer.layers.6.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
500
+ "vision_model.global_transformer.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
501
+ "vision_model.global_transformer.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
502
+ "vision_model.global_transformer.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
503
+ "vision_model.global_transformer.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
504
+ "vision_model.global_transformer.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
505
+ "vision_model.global_transformer.layers.7.gate_attn": "model-00001-of-00005.safetensors",
506
+ "vision_model.global_transformer.layers.7.gate_ffn": "model-00001-of-00005.safetensors",
507
+ "vision_model.global_transformer.layers.7.input_layernorm.bias": "model-00001-of-00005.safetensors",
508
+ "vision_model.global_transformer.layers.7.input_layernorm.weight": "model-00001-of-00005.safetensors",
509
+ "vision_model.global_transformer.layers.7.mlp.fc1.bias": "model-00001-of-00005.safetensors",
510
+ "vision_model.global_transformer.layers.7.mlp.fc1.weight": "model-00001-of-00005.safetensors",
511
+ "vision_model.global_transformer.layers.7.mlp.fc2.bias": "model-00001-of-00005.safetensors",
512
+ "vision_model.global_transformer.layers.7.mlp.fc2.weight": "model-00001-of-00005.safetensors",
513
+ "vision_model.global_transformer.layers.7.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
514
+ "vision_model.global_transformer.layers.7.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
515
+ "vision_model.global_transformer.layers.7.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
516
+ "vision_model.global_transformer.layers.7.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
517
+ "vision_model.global_transformer.layers.7.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
518
+ "vision_model.global_transformer.layers.7.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
519
+ "vision_model.layernorm_post.bias": "model-00001-of-00005.safetensors",
520
+ "vision_model.layernorm_post.weight": "model-00001-of-00005.safetensors",
521
+ "vision_model.layernorm_pre.bias": "model-00001-of-00005.safetensors",
522
+ "vision_model.layernorm_pre.weight": "model-00001-of-00005.safetensors",
523
+ "vision_model.patch_embedding.weight": "model-00001-of-00005.safetensors",
524
+ "vision_model.post_tile_positional_embedding.embedding.weight": "model-00001-of-00005.safetensors",
525
+ "vision_model.post_tile_positional_embedding.gate": "model-00001-of-00005.safetensors",
526
+ "vision_model.pre_tile_positional_embedding.embedding.weight": "model-00001-of-00005.safetensors",
527
+ "vision_model.pre_tile_positional_embedding.gate": "model-00001-of-00005.safetensors",
528
+ "vision_model.transformer.layers.0.input_layernorm.bias": "model-00001-of-00005.safetensors",
529
+ "vision_model.transformer.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
530
+ "vision_model.transformer.layers.0.mlp.fc1.bias": "model-00001-of-00005.safetensors",
531
+ "vision_model.transformer.layers.0.mlp.fc1.weight": "model-00001-of-00005.safetensors",
532
+ "vision_model.transformer.layers.0.mlp.fc2.bias": "model-00001-of-00005.safetensors",
533
+ "vision_model.transformer.layers.0.mlp.fc2.weight": "model-00001-of-00005.safetensors",
534
+ "vision_model.transformer.layers.0.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
535
+ "vision_model.transformer.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
536
+ "vision_model.transformer.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
537
+ "vision_model.transformer.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
538
+ "vision_model.transformer.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
539
+ "vision_model.transformer.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
540
+ "vision_model.transformer.layers.1.input_layernorm.bias": "model-00001-of-00005.safetensors",
541
+ "vision_model.transformer.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
542
+ "vision_model.transformer.layers.1.mlp.fc1.bias": "model-00001-of-00005.safetensors",
543
+ "vision_model.transformer.layers.1.mlp.fc1.weight": "model-00001-of-00005.safetensors",
544
+ "vision_model.transformer.layers.1.mlp.fc2.bias": "model-00001-of-00005.safetensors",
545
+ "vision_model.transformer.layers.1.mlp.fc2.weight": "model-00001-of-00005.safetensors",
546
+ "vision_model.transformer.layers.1.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
547
+ "vision_model.transformer.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
548
+ "vision_model.transformer.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
549
+ "vision_model.transformer.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
550
+ "vision_model.transformer.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
551
+ "vision_model.transformer.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
552
+ "vision_model.transformer.layers.10.input_layernorm.bias": "model-00001-of-00005.safetensors",
553
+ "vision_model.transformer.layers.10.input_layernorm.weight": "model-00001-of-00005.safetensors",
554
+ "vision_model.transformer.layers.10.mlp.fc1.bias": "model-00001-of-00005.safetensors",
555
+ "vision_model.transformer.layers.10.mlp.fc1.weight": "model-00001-of-00005.safetensors",
556
+ "vision_model.transformer.layers.10.mlp.fc2.bias": "model-00001-of-00005.safetensors",
557
+ "vision_model.transformer.layers.10.mlp.fc2.weight": "model-00001-of-00005.safetensors",
558
+ "vision_model.transformer.layers.10.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
559
+ "vision_model.transformer.layers.10.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
560
+ "vision_model.transformer.layers.10.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
561
+ "vision_model.transformer.layers.10.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
562
+ "vision_model.transformer.layers.10.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
563
+ "vision_model.transformer.layers.10.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
564
+ "vision_model.transformer.layers.11.input_layernorm.bias": "model-00001-of-00005.safetensors",
565
+ "vision_model.transformer.layers.11.input_layernorm.weight": "model-00001-of-00005.safetensors",
566
+ "vision_model.transformer.layers.11.mlp.fc1.bias": "model-00001-of-00005.safetensors",
567
+ "vision_model.transformer.layers.11.mlp.fc1.weight": "model-00001-of-00005.safetensors",
568
+ "vision_model.transformer.layers.11.mlp.fc2.bias": "model-00001-of-00005.safetensors",
569
+ "vision_model.transformer.layers.11.mlp.fc2.weight": "model-00001-of-00005.safetensors",
570
+ "vision_model.transformer.layers.11.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
571
+ "vision_model.transformer.layers.11.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
572
+ "vision_model.transformer.layers.11.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
573
+ "vision_model.transformer.layers.11.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
574
+ "vision_model.transformer.layers.11.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
575
+ "vision_model.transformer.layers.11.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
576
+ "vision_model.transformer.layers.12.input_layernorm.bias": "model-00001-of-00005.safetensors",
577
+ "vision_model.transformer.layers.12.input_layernorm.weight": "model-00001-of-00005.safetensors",
578
+ "vision_model.transformer.layers.12.mlp.fc1.bias": "model-00001-of-00005.safetensors",
579
+ "vision_model.transformer.layers.12.mlp.fc1.weight": "model-00001-of-00005.safetensors",
580
+ "vision_model.transformer.layers.12.mlp.fc2.bias": "model-00001-of-00005.safetensors",
581
+ "vision_model.transformer.layers.12.mlp.fc2.weight": "model-00001-of-00005.safetensors",
582
+ "vision_model.transformer.layers.12.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
583
+ "vision_model.transformer.layers.12.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
584
+ "vision_model.transformer.layers.12.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
585
+ "vision_model.transformer.layers.12.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
586
+ "vision_model.transformer.layers.12.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
587
+ "vision_model.transformer.layers.12.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
588
+ "vision_model.transformer.layers.13.input_layernorm.bias": "model-00001-of-00005.safetensors",
589
+ "vision_model.transformer.layers.13.input_layernorm.weight": "model-00001-of-00005.safetensors",
590
+ "vision_model.transformer.layers.13.mlp.fc1.bias": "model-00001-of-00005.safetensors",
591
+ "vision_model.transformer.layers.13.mlp.fc1.weight": "model-00001-of-00005.safetensors",
592
+ "vision_model.transformer.layers.13.mlp.fc2.bias": "model-00001-of-00005.safetensors",
593
+ "vision_model.transformer.layers.13.mlp.fc2.weight": "model-00001-of-00005.safetensors",
594
+ "vision_model.transformer.layers.13.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
595
+ "vision_model.transformer.layers.13.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
596
+ "vision_model.transformer.layers.13.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
597
+ "vision_model.transformer.layers.13.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
598
+ "vision_model.transformer.layers.13.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
599
+ "vision_model.transformer.layers.13.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
600
+ "vision_model.transformer.layers.14.input_layernorm.bias": "model-00001-of-00005.safetensors",
601
+ "vision_model.transformer.layers.14.input_layernorm.weight": "model-00001-of-00005.safetensors",
602
+ "vision_model.transformer.layers.14.mlp.fc1.bias": "model-00001-of-00005.safetensors",
603
+ "vision_model.transformer.layers.14.mlp.fc1.weight": "model-00001-of-00005.safetensors",
604
+ "vision_model.transformer.layers.14.mlp.fc2.bias": "model-00001-of-00005.safetensors",
605
+ "vision_model.transformer.layers.14.mlp.fc2.weight": "model-00001-of-00005.safetensors",
606
+ "vision_model.transformer.layers.14.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
607
+ "vision_model.transformer.layers.14.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
608
+ "vision_model.transformer.layers.14.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
609
+ "vision_model.transformer.layers.14.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
610
+ "vision_model.transformer.layers.14.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
611
+ "vision_model.transformer.layers.14.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
612
+ "vision_model.transformer.layers.15.input_layernorm.bias": "model-00001-of-00005.safetensors",
613
+ "vision_model.transformer.layers.15.input_layernorm.weight": "model-00001-of-00005.safetensors",
614
+ "vision_model.transformer.layers.15.mlp.fc1.bias": "model-00001-of-00005.safetensors",
615
+ "vision_model.transformer.layers.15.mlp.fc1.weight": "model-00001-of-00005.safetensors",
616
+ "vision_model.transformer.layers.15.mlp.fc2.bias": "model-00001-of-00005.safetensors",
617
+ "vision_model.transformer.layers.15.mlp.fc2.weight": "model-00001-of-00005.safetensors",
618
+ "vision_model.transformer.layers.15.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
619
+ "vision_model.transformer.layers.15.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
620
+ "vision_model.transformer.layers.15.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
621
+ "vision_model.transformer.layers.15.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
622
+ "vision_model.transformer.layers.15.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
623
+ "vision_model.transformer.layers.15.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
624
+ "vision_model.transformer.layers.16.input_layernorm.bias": "model-00001-of-00005.safetensors",
625
+ "vision_model.transformer.layers.16.input_layernorm.weight": "model-00001-of-00005.safetensors",
626
+ "vision_model.transformer.layers.16.mlp.fc1.bias": "model-00001-of-00005.safetensors",
627
+ "vision_model.transformer.layers.16.mlp.fc1.weight": "model-00001-of-00005.safetensors",
628
+ "vision_model.transformer.layers.16.mlp.fc2.bias": "model-00001-of-00005.safetensors",
629
+ "vision_model.transformer.layers.16.mlp.fc2.weight": "model-00001-of-00005.safetensors",
630
+ "vision_model.transformer.layers.16.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
631
+ "vision_model.transformer.layers.16.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
632
+ "vision_model.transformer.layers.16.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
633
+ "vision_model.transformer.layers.16.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
634
+ "vision_model.transformer.layers.16.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
635
+ "vision_model.transformer.layers.16.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
636
+ "vision_model.transformer.layers.17.input_layernorm.bias": "model-00001-of-00005.safetensors",
637
+ "vision_model.transformer.layers.17.input_layernorm.weight": "model-00001-of-00005.safetensors",
638
+ "vision_model.transformer.layers.17.mlp.fc1.bias": "model-00001-of-00005.safetensors",
639
+ "vision_model.transformer.layers.17.mlp.fc1.weight": "model-00001-of-00005.safetensors",
640
+ "vision_model.transformer.layers.17.mlp.fc2.bias": "model-00001-of-00005.safetensors",
641
+ "vision_model.transformer.layers.17.mlp.fc2.weight": "model-00001-of-00005.safetensors",
642
+ "vision_model.transformer.layers.17.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
643
+ "vision_model.transformer.layers.17.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
644
+ "vision_model.transformer.layers.17.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
645
+ "vision_model.transformer.layers.17.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
646
+ "vision_model.transformer.layers.17.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
647
+ "vision_model.transformer.layers.17.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
648
+ "vision_model.transformer.layers.18.input_layernorm.bias": "model-00001-of-00005.safetensors",
649
+ "vision_model.transformer.layers.18.input_layernorm.weight": "model-00001-of-00005.safetensors",
650
+ "vision_model.transformer.layers.18.mlp.fc1.bias": "model-00001-of-00005.safetensors",
651
+ "vision_model.transformer.layers.18.mlp.fc1.weight": "model-00001-of-00005.safetensors",
652
+ "vision_model.transformer.layers.18.mlp.fc2.bias": "model-00001-of-00005.safetensors",
653
+ "vision_model.transformer.layers.18.mlp.fc2.weight": "model-00001-of-00005.safetensors",
654
+ "vision_model.transformer.layers.18.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
655
+ "vision_model.transformer.layers.18.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
656
+ "vision_model.transformer.layers.18.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
657
+ "vision_model.transformer.layers.18.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
658
+ "vision_model.transformer.layers.18.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
659
+ "vision_model.transformer.layers.18.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
660
+ "vision_model.transformer.layers.19.input_layernorm.bias": "model-00001-of-00005.safetensors",
661
+ "vision_model.transformer.layers.19.input_layernorm.weight": "model-00001-of-00005.safetensors",
662
+ "vision_model.transformer.layers.19.mlp.fc1.bias": "model-00001-of-00005.safetensors",
663
+ "vision_model.transformer.layers.19.mlp.fc1.weight": "model-00001-of-00005.safetensors",
664
+ "vision_model.transformer.layers.19.mlp.fc2.bias": "model-00001-of-00005.safetensors",
665
+ "vision_model.transformer.layers.19.mlp.fc2.weight": "model-00001-of-00005.safetensors",
666
+ "vision_model.transformer.layers.19.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
667
+ "vision_model.transformer.layers.19.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
668
+ "vision_model.transformer.layers.19.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
669
+ "vision_model.transformer.layers.19.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
670
+ "vision_model.transformer.layers.19.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
671
+ "vision_model.transformer.layers.19.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
672
+ "vision_model.transformer.layers.2.input_layernorm.bias": "model-00001-of-00005.safetensors",
673
+ "vision_model.transformer.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
674
+ "vision_model.transformer.layers.2.mlp.fc1.bias": "model-00001-of-00005.safetensors",
675
+ "vision_model.transformer.layers.2.mlp.fc1.weight": "model-00001-of-00005.safetensors",
676
+ "vision_model.transformer.layers.2.mlp.fc2.bias": "model-00001-of-00005.safetensors",
677
+ "vision_model.transformer.layers.2.mlp.fc2.weight": "model-00001-of-00005.safetensors",
678
+ "vision_model.transformer.layers.2.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
679
+ "vision_model.transformer.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
680
+ "vision_model.transformer.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
681
+ "vision_model.transformer.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
682
+ "vision_model.transformer.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
683
+ "vision_model.transformer.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
684
+ "vision_model.transformer.layers.20.input_layernorm.bias": "model-00001-of-00005.safetensors",
685
+ "vision_model.transformer.layers.20.input_layernorm.weight": "model-00001-of-00005.safetensors",
686
+ "vision_model.transformer.layers.20.mlp.fc1.bias": "model-00001-of-00005.safetensors",
687
+ "vision_model.transformer.layers.20.mlp.fc1.weight": "model-00001-of-00005.safetensors",
688
+ "vision_model.transformer.layers.20.mlp.fc2.bias": "model-00001-of-00005.safetensors",
689
+ "vision_model.transformer.layers.20.mlp.fc2.weight": "model-00001-of-00005.safetensors",
690
+ "vision_model.transformer.layers.20.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
691
+ "vision_model.transformer.layers.20.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
692
+ "vision_model.transformer.layers.20.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
693
+ "vision_model.transformer.layers.20.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
694
+ "vision_model.transformer.layers.20.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
695
+ "vision_model.transformer.layers.20.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
696
+ "vision_model.transformer.layers.21.input_layernorm.bias": "model-00001-of-00005.safetensors",
697
+ "vision_model.transformer.layers.21.input_layernorm.weight": "model-00001-of-00005.safetensors",
698
+ "vision_model.transformer.layers.21.mlp.fc1.bias": "model-00001-of-00005.safetensors",
699
+ "vision_model.transformer.layers.21.mlp.fc1.weight": "model-00001-of-00005.safetensors",
700
+ "vision_model.transformer.layers.21.mlp.fc2.bias": "model-00001-of-00005.safetensors",
701
+ "vision_model.transformer.layers.21.mlp.fc2.weight": "model-00001-of-00005.safetensors",
702
+ "vision_model.transformer.layers.21.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
703
+ "vision_model.transformer.layers.21.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
704
+ "vision_model.transformer.layers.21.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
705
+ "vision_model.transformer.layers.21.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
706
+ "vision_model.transformer.layers.21.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
707
+ "vision_model.transformer.layers.21.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
708
+ "vision_model.transformer.layers.22.input_layernorm.bias": "model-00001-of-00005.safetensors",
709
+ "vision_model.transformer.layers.22.input_layernorm.weight": "model-00001-of-00005.safetensors",
710
+ "vision_model.transformer.layers.22.mlp.fc1.bias": "model-00001-of-00005.safetensors",
711
+ "vision_model.transformer.layers.22.mlp.fc1.weight": "model-00001-of-00005.safetensors",
712
+ "vision_model.transformer.layers.22.mlp.fc2.bias": "model-00001-of-00005.safetensors",
713
+ "vision_model.transformer.layers.22.mlp.fc2.weight": "model-00001-of-00005.safetensors",
714
+ "vision_model.transformer.layers.22.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
715
+ "vision_model.transformer.layers.22.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
716
+ "vision_model.transformer.layers.22.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
717
+ "vision_model.transformer.layers.22.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
718
+ "vision_model.transformer.layers.22.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
719
+ "vision_model.transformer.layers.22.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
720
+ "vision_model.transformer.layers.23.input_layernorm.bias": "model-00001-of-00005.safetensors",
721
+ "vision_model.transformer.layers.23.input_layernorm.weight": "model-00001-of-00005.safetensors",
722
+ "vision_model.transformer.layers.23.mlp.fc1.bias": "model-00001-of-00005.safetensors",
723
+ "vision_model.transformer.layers.23.mlp.fc1.weight": "model-00001-of-00005.safetensors",
724
+ "vision_model.transformer.layers.23.mlp.fc2.bias": "model-00001-of-00005.safetensors",
725
+ "vision_model.transformer.layers.23.mlp.fc2.weight": "model-00001-of-00005.safetensors",
726
+ "vision_model.transformer.layers.23.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
727
+ "vision_model.transformer.layers.23.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
728
+ "vision_model.transformer.layers.23.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
729
+ "vision_model.transformer.layers.23.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
730
+ "vision_model.transformer.layers.23.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
731
+ "vision_model.transformer.layers.23.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
732
+ "vision_model.transformer.layers.24.input_layernorm.bias": "model-00001-of-00005.safetensors",
733
+ "vision_model.transformer.layers.24.input_layernorm.weight": "model-00001-of-00005.safetensors",
734
+ "vision_model.transformer.layers.24.mlp.fc1.bias": "model-00001-of-00005.safetensors",
735
+ "vision_model.transformer.layers.24.mlp.fc1.weight": "model-00001-of-00005.safetensors",
736
+ "vision_model.transformer.layers.24.mlp.fc2.bias": "model-00001-of-00005.safetensors",
737
+ "vision_model.transformer.layers.24.mlp.fc2.weight": "model-00001-of-00005.safetensors",
738
+ "vision_model.transformer.layers.24.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
739
+ "vision_model.transformer.layers.24.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
740
+ "vision_model.transformer.layers.24.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
741
+ "vision_model.transformer.layers.24.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
742
+ "vision_model.transformer.layers.24.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
743
+ "vision_model.transformer.layers.24.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
744
+ "vision_model.transformer.layers.25.input_layernorm.bias": "model-00001-of-00005.safetensors",
745
+ "vision_model.transformer.layers.25.input_layernorm.weight": "model-00001-of-00005.safetensors",
746
+ "vision_model.transformer.layers.25.mlp.fc1.bias": "model-00001-of-00005.safetensors",
747
+ "vision_model.transformer.layers.25.mlp.fc1.weight": "model-00001-of-00005.safetensors",
748
+ "vision_model.transformer.layers.25.mlp.fc2.bias": "model-00001-of-00005.safetensors",
749
+ "vision_model.transformer.layers.25.mlp.fc2.weight": "model-00001-of-00005.safetensors",
750
+ "vision_model.transformer.layers.25.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
751
+ "vision_model.transformer.layers.25.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
752
+ "vision_model.transformer.layers.25.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
753
+ "vision_model.transformer.layers.25.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
754
+ "vision_model.transformer.layers.25.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
755
+ "vision_model.transformer.layers.25.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
756
+ "vision_model.transformer.layers.26.input_layernorm.bias": "model-00001-of-00005.safetensors",
757
+ "vision_model.transformer.layers.26.input_layernorm.weight": "model-00001-of-00005.safetensors",
758
+ "vision_model.transformer.layers.26.mlp.fc1.bias": "model-00001-of-00005.safetensors",
759
+ "vision_model.transformer.layers.26.mlp.fc1.weight": "model-00001-of-00005.safetensors",
760
+ "vision_model.transformer.layers.26.mlp.fc2.bias": "model-00001-of-00005.safetensors",
761
+ "vision_model.transformer.layers.26.mlp.fc2.weight": "model-00001-of-00005.safetensors",
762
+ "vision_model.transformer.layers.26.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
763
+ "vision_model.transformer.layers.26.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
764
+ "vision_model.transformer.layers.26.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
765
+ "vision_model.transformer.layers.26.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
766
+ "vision_model.transformer.layers.26.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
767
+ "vision_model.transformer.layers.26.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
768
+ "vision_model.transformer.layers.27.input_layernorm.bias": "model-00001-of-00005.safetensors",
769
+ "vision_model.transformer.layers.27.input_layernorm.weight": "model-00001-of-00005.safetensors",
770
+ "vision_model.transformer.layers.27.mlp.fc1.bias": "model-00001-of-00005.safetensors",
771
+ "vision_model.transformer.layers.27.mlp.fc1.weight": "model-00001-of-00005.safetensors",
772
+ "vision_model.transformer.layers.27.mlp.fc2.bias": "model-00001-of-00005.safetensors",
773
+ "vision_model.transformer.layers.27.mlp.fc2.weight": "model-00001-of-00005.safetensors",
774
+ "vision_model.transformer.layers.27.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
775
+ "vision_model.transformer.layers.27.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
776
+ "vision_model.transformer.layers.27.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
777
+ "vision_model.transformer.layers.27.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
778
+ "vision_model.transformer.layers.27.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
779
+ "vision_model.transformer.layers.27.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
780
+ "vision_model.transformer.layers.28.input_layernorm.bias": "model-00001-of-00005.safetensors",
781
+ "vision_model.transformer.layers.28.input_layernorm.weight": "model-00001-of-00005.safetensors",
782
+ "vision_model.transformer.layers.28.mlp.fc1.bias": "model-00001-of-00005.safetensors",
783
+ "vision_model.transformer.layers.28.mlp.fc1.weight": "model-00001-of-00005.safetensors",
784
+ "vision_model.transformer.layers.28.mlp.fc2.bias": "model-00001-of-00005.safetensors",
785
+ "vision_model.transformer.layers.28.mlp.fc2.weight": "model-00001-of-00005.safetensors",
786
+ "vision_model.transformer.layers.28.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
787
+ "vision_model.transformer.layers.28.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
788
+ "vision_model.transformer.layers.28.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
789
+ "vision_model.transformer.layers.28.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
790
+ "vision_model.transformer.layers.28.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
791
+ "vision_model.transformer.layers.28.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
792
+ "vision_model.transformer.layers.29.input_layernorm.bias": "model-00001-of-00005.safetensors",
793
+ "vision_model.transformer.layers.29.input_layernorm.weight": "model-00001-of-00005.safetensors",
794
+ "vision_model.transformer.layers.29.mlp.fc1.bias": "model-00001-of-00005.safetensors",
795
+ "vision_model.transformer.layers.29.mlp.fc1.weight": "model-00001-of-00005.safetensors",
796
+ "vision_model.transformer.layers.29.mlp.fc2.bias": "model-00001-of-00005.safetensors",
797
+ "vision_model.transformer.layers.29.mlp.fc2.weight": "model-00001-of-00005.safetensors",
798
+ "vision_model.transformer.layers.29.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
799
+ "vision_model.transformer.layers.29.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
800
+ "vision_model.transformer.layers.29.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
801
+ "vision_model.transformer.layers.29.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
802
+ "vision_model.transformer.layers.29.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
803
+ "vision_model.transformer.layers.29.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
804
+ "vision_model.transformer.layers.3.input_layernorm.bias": "model-00001-of-00005.safetensors",
805
+ "vision_model.transformer.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
806
+ "vision_model.transformer.layers.3.mlp.fc1.bias": "model-00001-of-00005.safetensors",
807
+ "vision_model.transformer.layers.3.mlp.fc1.weight": "model-00001-of-00005.safetensors",
808
+ "vision_model.transformer.layers.3.mlp.fc2.bias": "model-00001-of-00005.safetensors",
809
+ "vision_model.transformer.layers.3.mlp.fc2.weight": "model-00001-of-00005.safetensors",
810
+ "vision_model.transformer.layers.3.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
811
+ "vision_model.transformer.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
812
+ "vision_model.transformer.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
813
+ "vision_model.transformer.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
814
+ "vision_model.transformer.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
815
+ "vision_model.transformer.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
816
+ "vision_model.transformer.layers.30.input_layernorm.bias": "model-00001-of-00005.safetensors",
817
+ "vision_model.transformer.layers.30.input_layernorm.weight": "model-00001-of-00005.safetensors",
818
+ "vision_model.transformer.layers.30.mlp.fc1.bias": "model-00001-of-00005.safetensors",
819
+ "vision_model.transformer.layers.30.mlp.fc1.weight": "model-00001-of-00005.safetensors",
820
+ "vision_model.transformer.layers.30.mlp.fc2.bias": "model-00001-of-00005.safetensors",
821
+ "vision_model.transformer.layers.30.mlp.fc2.weight": "model-00001-of-00005.safetensors",
822
+ "vision_model.transformer.layers.30.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
823
+ "vision_model.transformer.layers.30.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
824
+ "vision_model.transformer.layers.30.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
825
+ "vision_model.transformer.layers.30.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
826
+ "vision_model.transformer.layers.30.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
827
+ "vision_model.transformer.layers.30.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
828
+ "vision_model.transformer.layers.31.input_layernorm.bias": "model-00001-of-00005.safetensors",
829
+ "vision_model.transformer.layers.31.input_layernorm.weight": "model-00001-of-00005.safetensors",
830
+ "vision_model.transformer.layers.31.mlp.fc1.bias": "model-00001-of-00005.safetensors",
831
+ "vision_model.transformer.layers.31.mlp.fc1.weight": "model-00001-of-00005.safetensors",
832
+ "vision_model.transformer.layers.31.mlp.fc2.bias": "model-00001-of-00005.safetensors",
833
+ "vision_model.transformer.layers.31.mlp.fc2.weight": "model-00001-of-00005.safetensors",
834
+ "vision_model.transformer.layers.31.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
835
+ "vision_model.transformer.layers.31.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
836
+ "vision_model.transformer.layers.31.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
837
+ "vision_model.transformer.layers.31.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
838
+ "vision_model.transformer.layers.31.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
839
+ "vision_model.transformer.layers.31.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
840
+ "vision_model.transformer.layers.4.input_layernorm.bias": "model-00001-of-00005.safetensors",
841
+ "vision_model.transformer.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
842
+ "vision_model.transformer.layers.4.mlp.fc1.bias": "model-00001-of-00005.safetensors",
843
+ "vision_model.transformer.layers.4.mlp.fc1.weight": "model-00001-of-00005.safetensors",
844
+ "vision_model.transformer.layers.4.mlp.fc2.bias": "model-00001-of-00005.safetensors",
845
+ "vision_model.transformer.layers.4.mlp.fc2.weight": "model-00001-of-00005.safetensors",
846
+ "vision_model.transformer.layers.4.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
847
+ "vision_model.transformer.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
848
+ "vision_model.transformer.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
849
+ "vision_model.transformer.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
850
+ "vision_model.transformer.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
851
+ "vision_model.transformer.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
852
+ "vision_model.transformer.layers.5.input_layernorm.bias": "model-00001-of-00005.safetensors",
853
+ "vision_model.transformer.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
854
+ "vision_model.transformer.layers.5.mlp.fc1.bias": "model-00001-of-00005.safetensors",
855
+ "vision_model.transformer.layers.5.mlp.fc1.weight": "model-00001-of-00005.safetensors",
856
+ "vision_model.transformer.layers.5.mlp.fc2.bias": "model-00001-of-00005.safetensors",
857
+ "vision_model.transformer.layers.5.mlp.fc2.weight": "model-00001-of-00005.safetensors",
858
+ "vision_model.transformer.layers.5.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
859
+ "vision_model.transformer.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
860
+ "vision_model.transformer.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
861
+ "vision_model.transformer.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
862
+ "vision_model.transformer.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
863
+ "vision_model.transformer.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
864
+ "vision_model.transformer.layers.6.input_layernorm.bias": "model-00001-of-00005.safetensors",
865
+ "vision_model.transformer.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
866
+ "vision_model.transformer.layers.6.mlp.fc1.bias": "model-00001-of-00005.safetensors",
867
+ "vision_model.transformer.layers.6.mlp.fc1.weight": "model-00001-of-00005.safetensors",
868
+ "vision_model.transformer.layers.6.mlp.fc2.bias": "model-00001-of-00005.safetensors",
869
+ "vision_model.transformer.layers.6.mlp.fc2.weight": "model-00001-of-00005.safetensors",
870
+ "vision_model.transformer.layers.6.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
871
+ "vision_model.transformer.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
872
+ "vision_model.transformer.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
873
+ "vision_model.transformer.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
874
+ "vision_model.transformer.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
875
+ "vision_model.transformer.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
876
+ "vision_model.transformer.layers.7.input_layernorm.bias": "model-00001-of-00005.safetensors",
877
+ "vision_model.transformer.layers.7.input_layernorm.weight": "model-00001-of-00005.safetensors",
878
+ "vision_model.transformer.layers.7.mlp.fc1.bias": "model-00001-of-00005.safetensors",
879
+ "vision_model.transformer.layers.7.mlp.fc1.weight": "model-00001-of-00005.safetensors",
880
+ "vision_model.transformer.layers.7.mlp.fc2.bias": "model-00001-of-00005.safetensors",
881
+ "vision_model.transformer.layers.7.mlp.fc2.weight": "model-00001-of-00005.safetensors",
882
+ "vision_model.transformer.layers.7.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
883
+ "vision_model.transformer.layers.7.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
884
+ "vision_model.transformer.layers.7.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
885
+ "vision_model.transformer.layers.7.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
886
+ "vision_model.transformer.layers.7.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
887
+ "vision_model.transformer.layers.7.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
888
+ "vision_model.transformer.layers.8.input_layernorm.bias": "model-00001-of-00005.safetensors",
889
+ "vision_model.transformer.layers.8.input_layernorm.weight": "model-00001-of-00005.safetensors",
890
+ "vision_model.transformer.layers.8.mlp.fc1.bias": "model-00001-of-00005.safetensors",
891
+ "vision_model.transformer.layers.8.mlp.fc1.weight": "model-00001-of-00005.safetensors",
892
+ "vision_model.transformer.layers.8.mlp.fc2.bias": "model-00001-of-00005.safetensors",
893
+ "vision_model.transformer.layers.8.mlp.fc2.weight": "model-00001-of-00005.safetensors",
894
+ "vision_model.transformer.layers.8.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
895
+ "vision_model.transformer.layers.8.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
896
+ "vision_model.transformer.layers.8.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
897
+ "vision_model.transformer.layers.8.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
898
+ "vision_model.transformer.layers.8.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
899
+ "vision_model.transformer.layers.8.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
900
+ "vision_model.transformer.layers.9.input_layernorm.bias": "model-00001-of-00005.safetensors",
901
+ "vision_model.transformer.layers.9.input_layernorm.weight": "model-00001-of-00005.safetensors",
902
+ "vision_model.transformer.layers.9.mlp.fc1.bias": "model-00001-of-00005.safetensors",
903
+ "vision_model.transformer.layers.9.mlp.fc1.weight": "model-00001-of-00005.safetensors",
904
+ "vision_model.transformer.layers.9.mlp.fc2.bias": "model-00001-of-00005.safetensors",
905
+ "vision_model.transformer.layers.9.mlp.fc2.weight": "model-00001-of-00005.safetensors",
906
+ "vision_model.transformer.layers.9.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
907
+ "vision_model.transformer.layers.9.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
908
+ "vision_model.transformer.layers.9.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
909
+ "vision_model.transformer.layers.9.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
910
+ "vision_model.transformer.layers.9.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
911
+ "vision_model.transformer.layers.9.self_attn.v_proj.weight": "model-00001-of-00005.safetensors"
912
+ }
913
+ }
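The weight_map above is the routing table of this sharded checkpoint: it sends every tensor name to the shard file that stores it, so a loader only has to open the shards it actually needs. A minimal illustrative sketch of reading that index (assuming a local copy of model.safetensors.index.json, the file shown in this diff):

import json
from collections import defaultdict

# Read the shard index uploaded in this commit.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

# weight_map: tensor name -> shard file. Invert it to see what each shard holds.
shards = defaultdict(list)
for tensor_name, shard_file in index["weight_map"].items():
    shards[shard_file].append(tensor_name)

for shard_file, names in sorted(shards.items()):
    print(f"{shard_file}: {len(names)} tensors")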
preprocessor_config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_pad": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_processor_type": "MllamaImageProcessor",
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "max_image_tiles": 4,
+ "resample": 2,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "height": 560,
+ "width": 560
+ }
+ }
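This config pins down the image pipeline: convert to RGB, resize to 560x560 (resample=2 is PIL's bilinear filter), rescale by 0.00392156862745098 = 1/255, then normalize with the listed mean/std (the familiar OpenAI CLIP constants). A simplified sketch of those steps, under the assumption that padding and splitting into up to max_image_tiles=4 tiles are left to the real MllamaImageProcessor:

import numpy as np
from PIL import Image

# Constants copied from preprocessor_config.json above.
IMAGE_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
IMAGE_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)
RESCALE_FACTOR = np.float32(0.00392156862745098)  # 1/255

def preprocess(path):
    img = Image.open(path).convert("RGB")                   # do_convert_rgb
    img = img.resize((560, 560), resample=Image.BILINEAR)   # do_resize, resample=2
    arr = np.asarray(img, dtype=np.float32) * RESCALE_FACTOR  # do_rescale
    return (arr - IMAGE_MEAN) / IMAGE_STD                   # do_normalize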
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "bos_token": {
+ "content": "<|begin_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|eot_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|finetune_right_pad_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
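These mappings reflect an instruction-tuned setup: sequences open with <|begin_of_text|>, generation stops at <|eot_id|>, and <|finetune_right_pad_id|> is reserved for padding. Once the files in this commit are downloaded, the tokens are exposed directly (illustrative; "." stands in for a hypothetical local clone of this repo):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(".")
print(tok.bos_token)  # <|begin_of_text|>
print(tok.eos_token)  # <|eot_id|>
print(tok.pad_token)  # <|finetune_right_pad_id|>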
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,2071 @@
+ {
+ "added_tokens_decoder": {
+ "128000": {
+ "content": "<|begin_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128001": {
+ "content": "<|end_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128002": {
+ "content": "<|reserved_special_token_0|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128003": {
+ "content": "<|reserved_special_token_1|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128004": {
+ "content": "<|finetune_right_pad_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128005": {
+ "content": "<|step_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128006": {
+ "content": "<|start_header_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128007": {
+ "content": "<|end_header_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128008": {
+ "content": "<|eom_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128009": {
+ "content": "<|eot_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128010": {
+ "content": "<|python_tag|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128011": {
+ "content": "<|reserved_special_token_2|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128012": {
+ "content": "<|reserved_special_token_3|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128013": {
+ "content": "<|reserved_special_token_4|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128014": {
+ "content": "<|reserved_special_token_5|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128015": {
+ "content": "<|reserved_special_token_6|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128016": {
+ "content": "<|reserved_special_token_7|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128017": {
+ "content": "<|reserved_special_token_8|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128018": {
+ "content": "<|reserved_special_token_9|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128019": {
+ "content": "<|reserved_special_token_10|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128020": {
+ "content": "<|reserved_special_token_11|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128021": {
+ "content": "<|reserved_special_token_12|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128022": {
+ "content": "<|reserved_special_token_13|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128023": {
+ "content": "<|reserved_special_token_14|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128024": {
+ "content": "<|reserved_special_token_15|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128025": {
+ "content": "<|reserved_special_token_16|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128026": {
+ "content": "<|reserved_special_token_17|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128027": {
+ "content": "<|reserved_special_token_18|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128028": {
+ "content": "<|reserved_special_token_19|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128029": {
+ "content": "<|reserved_special_token_20|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128030": {
+ "content": "<|reserved_special_token_21|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128031": {
+ "content": "<|reserved_special_token_22|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128032": {
+ "content": "<|reserved_special_token_23|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128033": {
+ "content": "<|reserved_special_token_24|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128034": {
+ "content": "<|reserved_special_token_25|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128035": {
+ "content": "<|reserved_special_token_26|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128036": {
+ "content": "<|reserved_special_token_27|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128037": {
+ "content": "<|reserved_special_token_28|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128038": {
+ "content": "<|reserved_special_token_29|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128039": {
+ "content": "<|reserved_special_token_30|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128040": {
+ "content": "<|reserved_special_token_31|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128041": {
+ "content": "<|reserved_special_token_32|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128042": {
+ "content": "<|reserved_special_token_33|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128043": {
+ "content": "<|reserved_special_token_34|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128044": {
+ "content": "<|reserved_special_token_35|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128045": {
+ "content": "<|reserved_special_token_36|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128046": {
+ "content": "<|reserved_special_token_37|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128047": {
+ "content": "<|reserved_special_token_38|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128048": {
+ "content": "<|reserved_special_token_39|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128049": {
+ "content": "<|reserved_special_token_40|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128050": {
+ "content": "<|reserved_special_token_41|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128051": {
+ "content": "<|reserved_special_token_42|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128052": {
+ "content": "<|reserved_special_token_43|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128053": {
+ "content": "<|reserved_special_token_44|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128054": {
+ "content": "<|reserved_special_token_45|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128055": {
+ "content": "<|reserved_special_token_46|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128056": {
+ "content": "<|reserved_special_token_47|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128057": {
+ "content": "<|reserved_special_token_48|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128058": {
+ "content": "<|reserved_special_token_49|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128059": {
+ "content": "<|reserved_special_token_50|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128060": {
+ "content": "<|reserved_special_token_51|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128061": {
+ "content": "<|reserved_special_token_52|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128062": {
+ "content": "<|reserved_special_token_53|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128063": {
+ "content": "<|reserved_special_token_54|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128064": {
+ "content": "<|reserved_special_token_55|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128065": {
+ "content": "<|reserved_special_token_56|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128066": {
+ "content": "<|reserved_special_token_57|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128067": {
+ "content": "<|reserved_special_token_58|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128068": {
+ "content": "<|reserved_special_token_59|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128069": {
+ "content": "<|reserved_special_token_60|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128070": {
+ "content": "<|reserved_special_token_61|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128071": {
+ "content": "<|reserved_special_token_62|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128072": {
+ "content": "<|reserved_special_token_63|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128073": {
+ "content": "<|reserved_special_token_64|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128074": {
+ "content": "<|reserved_special_token_65|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128075": {
+ "content": "<|reserved_special_token_66|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128076": {
+ "content": "<|reserved_special_token_67|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128077": {
+ "content": "<|reserved_special_token_68|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128078": {
+ "content": "<|reserved_special_token_69|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128079": {
+ "content": "<|reserved_special_token_70|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128080": {
+ "content": "<|reserved_special_token_71|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128081": {
+ "content": "<|reserved_special_token_72|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128082": {
+ "content": "<|reserved_special_token_73|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128083": {
+ "content": "<|reserved_special_token_74|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128084": {
+ "content": "<|reserved_special_token_75|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128085": {
+ "content": "<|reserved_special_token_76|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128086": {
+ "content": "<|reserved_special_token_77|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128087": {
+ "content": "<|reserved_special_token_78|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128088": {
+ "content": "<|reserved_special_token_79|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128089": {
+ "content": "<|reserved_special_token_80|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128090": {
+ "content": "<|reserved_special_token_81|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128091": {
+ "content": "<|reserved_special_token_82|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128092": {
+ "content": "<|reserved_special_token_83|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128093": {
+ "content": "<|reserved_special_token_84|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128094": {
+ "content": "<|reserved_special_token_85|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128095": {
+ "content": "<|reserved_special_token_86|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128096": {
+ "content": "<|reserved_special_token_87|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128097": {
+ "content": "<|reserved_special_token_88|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128098": {
+ "content": "<|reserved_special_token_89|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128099": {
+ "content": "<|reserved_special_token_90|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128100": {
+ "content": "<|reserved_special_token_91|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128101": {
+ "content": "<|reserved_special_token_92|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128102": {
+ "content": "<|reserved_special_token_93|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128103": {
+ "content": "<|reserved_special_token_94|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128104": {
+ "content": "<|reserved_special_token_95|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128105": {
+ "content": "<|reserved_special_token_96|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128106": {
+ "content": "<|reserved_special_token_97|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128107": {
+ "content": "<|reserved_special_token_98|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128108": {
+ "content": "<|reserved_special_token_99|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128109": {
+ "content": "<|reserved_special_token_100|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128110": {
+ "content": "<|reserved_special_token_101|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128111": {
+ "content": "<|reserved_special_token_102|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128112": {
+ "content": "<|reserved_special_token_103|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128113": {
+ "content": "<|reserved_special_token_104|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128114": {
+ "content": "<|reserved_special_token_105|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128115": {
+ "content": "<|reserved_special_token_106|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128116": {
+ "content": "<|reserved_special_token_107|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128117": {
+ "content": "<|reserved_special_token_108|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128118": {
+ "content": "<|reserved_special_token_109|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128119": {
+ "content": "<|reserved_special_token_110|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128120": {
+ "content": "<|reserved_special_token_111|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128121": {
+ "content": "<|reserved_special_token_112|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128122": {
+ "content": "<|reserved_special_token_113|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128123": {
+ "content": "<|reserved_special_token_114|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128124": {
+ "content": "<|reserved_special_token_115|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128125": {
+ "content": "<|reserved_special_token_116|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128126": {
+ "content": "<|reserved_special_token_117|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128127": {
+ "content": "<|reserved_special_token_118|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128128": {
+ "content": "<|reserved_special_token_119|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128129": {
+ "content": "<|reserved_special_token_120|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128130": {
+ "content": "<|reserved_special_token_121|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128131": {
+ "content": "<|reserved_special_token_122|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128132": {
+ "content": "<|reserved_special_token_123|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128133": {
+ "content": "<|reserved_special_token_124|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128134": {
+ "content": "<|reserved_special_token_125|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128135": {
+ "content": "<|reserved_special_token_126|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128136": {
+ "content": "<|reserved_special_token_127|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128137": {
+ "content": "<|reserved_special_token_128|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128138": {
+ "content": "<|reserved_special_token_129|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128139": {
+ "content": "<|reserved_special_token_130|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128140": {
+ "content": "<|reserved_special_token_131|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128141": {
+ "content": "<|reserved_special_token_132|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128142": {
+ "content": "<|reserved_special_token_133|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128143": {
+ "content": "<|reserved_special_token_134|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128144": {
+ "content": "<|reserved_special_token_135|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128145": {
+ "content": "<|reserved_special_token_136|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128146": {
+ "content": "<|reserved_special_token_137|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128147": {
+ "content": "<|reserved_special_token_138|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128148": {
+ "content": "<|reserved_special_token_139|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128149": {
+ "content": "<|reserved_special_token_140|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128150": {
+ "content": "<|reserved_special_token_141|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128151": {
+ "content": "<|reserved_special_token_142|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128152": {
+ "content": "<|reserved_special_token_143|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128153": {
+ "content": "<|reserved_special_token_144|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128154": {
+ "content": "<|reserved_special_token_145|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128155": {
+ "content": "<|reserved_special_token_146|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128156": {
+ "content": "<|reserved_special_token_147|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128157": {
+ "content": "<|reserved_special_token_148|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128158": {
+ "content": "<|reserved_special_token_149|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128159": {
+ "content": "<|reserved_special_token_150|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128160": {
+ "content": "<|reserved_special_token_151|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128161": {
+ "content": "<|reserved_special_token_152|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128162": {
+ "content": "<|reserved_special_token_153|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128163": {
+ "content": "<|reserved_special_token_154|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128164": {
+ "content": "<|reserved_special_token_155|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128165": {
+ "content": "<|reserved_special_token_156|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128166": {
+ "content": "<|reserved_special_token_157|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128167": {
+ "content": "<|reserved_special_token_158|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128168": {
+ "content": "<|reserved_special_token_159|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128169": {
+ "content": "<|reserved_special_token_160|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128170": {
+ "content": "<|reserved_special_token_161|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128171": {
+ "content": "<|reserved_special_token_162|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128172": {
+ "content": "<|reserved_special_token_163|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128173": {
+ "content": "<|reserved_special_token_164|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128174": {
+ "content": "<|reserved_special_token_165|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128175": {
+ "content": "<|reserved_special_token_166|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128176": {
+ "content": "<|reserved_special_token_167|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128177": {
+ "content": "<|reserved_special_token_168|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128178": {
+ "content": "<|reserved_special_token_169|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128179": {
+ "content": "<|reserved_special_token_170|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128180": {
+ "content": "<|reserved_special_token_171|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128181": {
+ "content": "<|reserved_special_token_172|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128182": {
+ "content": "<|reserved_special_token_173|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128183": {
+ "content": "<|reserved_special_token_174|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128184": {
+ "content": "<|reserved_special_token_175|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128185": {
+ "content": "<|reserved_special_token_176|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128186": {
+ "content": "<|reserved_special_token_177|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128187": {
+ "content": "<|reserved_special_token_178|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128188": {
+ "content": "<|reserved_special_token_179|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128189": {
+ "content": "<|reserved_special_token_180|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128190": {
+ "content": "<|reserved_special_token_181|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128191": {
+ "content": "<|reserved_special_token_182|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128192": {
+ "content": "<|reserved_special_token_183|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128193": {
+ "content": "<|reserved_special_token_184|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128194": {
+ "content": "<|reserved_special_token_185|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128195": {
+ "content": "<|reserved_special_token_186|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128196": {
+ "content": "<|reserved_special_token_187|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128197": {
+ "content": "<|reserved_special_token_188|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128198": {
+ "content": "<|reserved_special_token_189|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128199": {
+ "content": "<|reserved_special_token_190|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128200": {
+ "content": "<|reserved_special_token_191|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128201": {
+ "content": "<|reserved_special_token_192|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128202": {
+ "content": "<|reserved_special_token_193|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128203": {
+ "content": "<|reserved_special_token_194|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128204": {
+ "content": "<|reserved_special_token_195|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128205": {
+ "content": "<|reserved_special_token_196|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128206": {
+ "content": "<|reserved_special_token_197|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128207": {
+ "content": "<|reserved_special_token_198|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128208": {
+ "content": "<|reserved_special_token_199|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128209": {
+ "content": "<|reserved_special_token_200|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128210": {
+ "content": "<|reserved_special_token_201|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128211": {
+ "content": "<|reserved_special_token_202|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128212": {
+ "content": "<|reserved_special_token_203|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128213": {
+ "content": "<|reserved_special_token_204|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128214": {
+ "content": "<|reserved_special_token_205|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128215": {
+ "content": "<|reserved_special_token_206|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128216": {
+ "content": "<|reserved_special_token_207|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128217": {
+ "content": "<|reserved_special_token_208|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128218": {
+ "content": "<|reserved_special_token_209|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128219": {
+ "content": "<|reserved_special_token_210|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128220": {
+ "content": "<|reserved_special_token_211|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128221": {
+ "content": "<|reserved_special_token_212|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128222": {
+ "content": "<|reserved_special_token_213|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128223": {
+ "content": "<|reserved_special_token_214|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128224": {
+ "content": "<|reserved_special_token_215|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128225": {
+ "content": "<|reserved_special_token_216|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128226": {
+ "content": "<|reserved_special_token_217|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128227": {
+ "content": "<|reserved_special_token_218|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128228": {
+ "content": "<|reserved_special_token_219|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128229": {
+ "content": "<|reserved_special_token_220|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128230": {
+ "content": "<|reserved_special_token_221|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128231": {
+ "content": "<|reserved_special_token_222|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128232": {
+ "content": "<|reserved_special_token_223|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128233": {
+ "content": "<|reserved_special_token_224|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128234": {
+ "content": "<|reserved_special_token_225|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128235": {
+ "content": "<|reserved_special_token_226|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128236": {
+ "content": "<|reserved_special_token_227|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128237": {
+ "content": "<|reserved_special_token_228|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128238": {
+ "content": "<|reserved_special_token_229|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128239": {
+ "content": "<|reserved_special_token_230|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128240": {
+ "content": "<|reserved_special_token_231|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128241": {
+ "content": "<|reserved_special_token_232|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128242": {
+ "content": "<|reserved_special_token_233|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128243": {
+ "content": "<|reserved_special_token_234|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128244": {
+ "content": "<|reserved_special_token_235|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128245": {
+ "content": "<|reserved_special_token_236|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128246": {
+ "content": "<|reserved_special_token_237|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128247": {
+ "content": "<|reserved_special_token_238|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128248": {
+ "content": "<|reserved_special_token_239|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128249": {
+ "content": "<|reserved_special_token_240|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128250": {
+ "content": "<|reserved_special_token_241|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128251": {
+ "content": "<|reserved_special_token_242|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128252": {
+ "content": "<|reserved_special_token_243|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128253": {
+ "content": "<|reserved_special_token_244|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128254": {
+ "content": "<|reserved_special_token_245|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128255": {
+ "content": "<|reserved_special_token_246|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128256": {
+ "content": "<|image|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<|begin_of_text|>",
+ "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n {%- if strftime_now is defined %}\n {%- set date_string = strftime_now(\"%d %b %Y\") %}\n {%- else %}\n {%- set date_string = \"26 Jul 2024\" %}\n {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n{%- else %}\n {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- Find out if there are any images #}\n{% set image_ns = namespace(has_images=false) %} \n{%- for message in messages %}\n {%- for content in message['content'] %}\n {%- if content['type'] == 'image' %}\n {%- set image_ns.has_images = true %}\n {%- endif %}\n {%- endfor %}\n{%- endfor %}\n\n{#- Error out if there are images and system message #}\n{%- if image_ns.has_images and not system_message == \"\" %}\n {{- raise_exception(\"Prompting with images is incompatible with system messages.\") }}\n{%- endif %}\n\n{#- System message if there are no images #}\n{%- if not image_ns.has_images %}\n {{- \"<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n {%- if tools is not none %}\n {{- \"Environment: ipython\\n\" }}\n {%- endif %}\n {{- \"Cutting Knowledge Date: December 2023\\n\" }}\n {{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n {%- if tools is not none and not tools_in_user_message %}\n {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n {%- endif %}\n {{- system_message }}\n {{- \"<|eot_id|>\" }}\n{%- endif %}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n {#- Extract the first user message so we can plug it in here #}\n {%- if messages | length != 0 %}\n {%- set first_user_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n {%- else %}\n {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n' }}\n {%- if message['content'] is string %}\n {{- message['content'] }}\n {%- else %}\n {%- for content in message['content'] %}\n {%- if content['type'] == 'image' %}\n {{- '<|image|>' }}\n {%- elif content['type'] == 'text' %}\n {{- content['text'] }}\n {%- endif %}\n {%- endfor %}\n {%- endif %}\n {{- '<|eot_id|>' }}\n {%- elif 'tool_calls' in message %}\n {%- if not message.tool_calls|length == 1 %}\n {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n {%- endif %}\n {%- set tool_call = message.tool_calls[0].function %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n {{- '\"parameters\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- \"}\" }}\n {{- \"<|eot_id|>\" }}\n {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n {%- if message.content is mapping or message.content is iterable %}\n {{- message.content | tojson }}\n {%- else %}\n {{- message.content }}\n {%- endif %}\n {{- \"<|eot_id|>\" }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",
+ "clean_up_tokenization_spaces": true,
+ "eos_token": "<|eot_id|>",
+ "model_input_names": [
+ "input_ids",
+ "attention_mask"
+ ],
+ "model_max_length": 131072,
+ "pad_token": "<|finetune_right_pad_id|>",
+ "tokenizer_class": "PreTrainedTokenizerFast"
+ }
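
The chat_template above is a Jinja template that assembles the header tokens, injects tool definitions, renders image content parts as the <|image|> token (id 128256), and raises an exception if an image is combined with a system message. A minimal sketch of driving it through apply_chat_template, again with a placeholder checkpoint path:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/checkpoint")  # placeholder path

# An image turn carries a list of content parts; the template emits
# <|image|> for the "image" part and the raw text for "text" parts.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

prompt = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
# Expected shape of the rendered prompt (no system block, since the
# template skips it whenever an image is present):
#   <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n
#   <|image|>Describe this image.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n

For full multimodal inference, AutoProcessor would pair this tokenizer with the MllamaImageProcessor configured earlier; model_max_length caps inputs at the 131072-token context window.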