TheYuriLover committed
Commit fad9b45
1 Parent(s): bd3d231

Upload 11 files
README.md CHANGED
@@ -1,14 +1,617 @@
1
  ---
2
  license: other
3
  ---
4
- This is the gptq 4bit quantization of this model: https://huggingface.co/jondurbin/airoboros-13b-gpt4
5
 
6
- This quantization was made by using this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
7
 
8
- And I used the triton branch with all the gptq implementations available (true_sequential + act_order + groupsize 128)
9
 
10
- CUDA_VISIBLE_DEVICES=0 python llama.py ./airoboros-13b-gpt4-TRITON c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors airoboros-13b-gpt4-128g-ts-ao.safetensors
11
 
12
- <div style="width: 100%;">
13
- <img src="https://i.imgur.com/wU27h15.png" alt="TheBlokeAI" style="width: 50%; min-width: 400px; display: block; margin: auto;">
14
- </div>
1
  ---
2
  license: other
3
+ datasets:
4
+ - jondurbin/airoboros-gpt4
5
  ---
 
6
 
7
+ ## Overview
8
 
9
+ This is a fine-tuned 13b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
10
 
11
+ The context size has been increased to 4096.
12
 
13
+ The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4), with a specific focus on:
14
+ - trivia
15
+ - math/reasoning (although it still sucks)
16
+ - coding
17
+ - multiple choice and fill-in-the-blank
18
+ - context-obedient question answering
19
+ - theory of mind
20
+ - misc/general
21
+
22
+ This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
23
+ ```
24
+ USER:
25
+ [prompt]
26
+
27
+ </s>
28
+ ASSISTANT:
29
+ ```
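+
+ If you're scripting against the model rather than chatting, assembling that template is just string formatting. A minimal sketch (my own helper, nothing FastChat-specific):
+ ```python
+ def vicuna_prompt(user_message: str) -> str:
+     # Single-turn vicuna v1.1 style prompt, per the template above.
+     # </s> is the end-of-sequence token used as the turn separator.
+     return f"USER:\n{user_message}\n\n</s>\nASSISTANT:"
+
+ print(vicuna_prompt("What color is the sky?"))
+ ```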
30
+
31
+ The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
32
+
33
+ ### Usage
34
+
35
+ The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
36
+ ```
37
+ pip install git+https://github.com/jondurbin/FastChat
38
+ ```
39
+
40
+ Then, you can invoke it like so (after downloading the model):
41
+ ```
42
+ python -m fastchat.serve.cli \
43
+ --model-path airoboros-13b-gpt4 \
44
+ --temperature 0.5 \
45
+ --max-new-tokens 4096 \
46
+ --context-length 4096 \
47
+ --conv-template vicuna_v1.1 \
48
+ --no-history
49
+ ```
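+
+ If you'd rather skip fastchat entirely, here's a rough sketch using plain transformers with the full-precision weights (untested here; it assumes the model was downloaded to `airoboros-13b-gpt4`, a recent transformers version, and enough VRAM, and `device_map="auto"` needs accelerate installed):
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ path = "airoboros-13b-gpt4"  # local path to the downloaded model
+ tokenizer = AutoTokenizer.from_pretrained(path)
+ model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, device_map="auto")
+
+ # Wrap the request in the vicuna template described above.
+ prompt = "USER:\nWhat color is the sky?\n\n</s>\nASSISTANT:"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.5)
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```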
50
+
51
+ ### Context obedient question answering
52
+
53
+ By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
54
+
55
+ The format for a closed-context prompt is as follows:
56
+ ```
57
+ BEGININPUT
58
+ BEGINCONTEXT
59
+ url: https://some.web.site/123
60
+ date: 2023-06-01
61
+ ... other metadata ...
62
+ ENDCONTEXT
63
+ [insert your text blocks here]
64
+ ENDINPUT
65
+ [add as many other blocks, in the exact same format]
66
+ BEGININSTRUCTION
67
+ [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
68
+ ENDINSTRUCTION
69
+ ```
70
+
71
+ I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the relevant information and how to associate specific sources with it.
72
+ - `BEGININPUT` - denotes a new input block
73
+ - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
74
+ - `ENDCONTEXT` - denotes the end of the metadata block for the current input
75
+ - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
76
+ - `ENDINPUT` - denotes the end of the current input block
77
+ - [repeat as many input blocks in this format as you want]
78
+ - `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or several) to respond to for all of the input blocks above.
79
+ - [instruction(s)]
80
+ - `ENDINSTRUCTION` - denotes the end of the instruction set
81
+
82
+ It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
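+
+ If you're building these prompts programmatically, here's a small helper sketch (a convenience wrapper of mine, not part of the training pipeline) that assembles the blocks in the exact format above:
+ ```python
+ def closed_context_prompt(blocks, instructions):
+     """blocks: list of (metadata dict, text) pairs; instructions: list of strings."""
+     parts = []
+     for metadata, text in blocks:
+         parts.append("BEGININPUT")
+         parts.append("BEGINCONTEXT")
+         parts.extend(f"{key}: {value}" for key, value in metadata.items())
+         parts.append("ENDCONTEXT")
+         parts.append(text)
+         parts.append("ENDINPUT")
+     parts.append("BEGININSTRUCTION")
+     parts.extend(instructions)
+     parts.append("ENDINSTRUCTION")
+     return "\n".join(parts)
+
+ print(closed_context_prompt(
+     [({"date": "2021-01-01", "url": "https://web.site/123"},
+       "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
+     ["What color are blueberries? Source?"],
+ ))
+ ```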
83
+
84
+ Here's a trivial but important example to prove the point:
85
+ ```
86
+ BEGININPUT
87
+ BEGINCONTEXT
88
+ date: 2021-01-01
89
+ url: https://web.site/123
90
+ ENDCONTEXT
91
+ In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
92
+ ENDINPUT
93
+ BEGININSTRUCTION
94
+ What color are blueberries? Source?
95
+ ENDINSTRUCTION
96
+ ```
97
+
98
+ And the response:
99
+ ```
100
+ Blueberries are now green.
101
+ Source:
102
+ date: 2021-01-01
103
+ url: https://web.site/123
104
+ ```
105
+
106
+ The prompt itself should be wrapped in the vicuna 1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described (although it seems to understand without it as well, so :shrug:). For example, the above prompt, as a standalone, would be:
107
+
108
+ ```
109
+ USER:
110
+ BEGININPUT
111
+ BEGINCONTEXT
112
+ date: 2021-01-01
113
+ url: https://web.site/123
114
+ ENDCONTEXT
115
+ In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
116
+ ENDINPUT
117
+ BEGININSTRUCTION
118
+ What color are blueberries? Source?
119
+ ENDINSTRUCTION
120
+ </s>
121
+
122
+ ASSISTANT:
123
+ ```
124
+
125
+ <details>
126
+ <summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
127
+
128
+ Prompt (not including vicuna format which would be needed):
129
+ ```
130
+ BEGININPUT
131
+ BEGINCONTEXT
132
+ date: 2092-02-01
133
+ link: https://newwikisite.com/Michigan
134
+ contributors: Foolo Barslette
135
+ ENDCONTEXT
136
+ Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
137
+ It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
138
+ With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
139
+ Its capital is Chaslany, and its most populous city is Trentroit.
140
+ Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
141
+ The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
142
+
143
+ Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
144
+ The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
145
+ The McKendrick Bridge unites the peninsulas.
146
+ Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
147
+
148
+ It also possesses 84,350 inland lakes and ponds.
149
+ Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
150
+
151
+ The area was initially inhabited by a succession of Native American tribes spanning millennia.
152
+ In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
153
+ Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
154
+ After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
155
+
156
+ The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
157
+
158
+ The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
159
+ Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
160
+
161
+ Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
162
+
163
+ Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
164
+ Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
165
+
166
+ While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
167
+ The country's three major automobile companies are headquartered in Metro Trentroit.
168
+ Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
169
+ The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
170
+ ENDINPUT
171
+ BEGININSTRUCTION
172
+ What is Michigan's capital?
173
+ Give me a JSON object with 3 facts from Foolo's article.
174
+ What are the states that border Michigan?
175
+ Reference?
176
+ ```
177
+
178
+ Response:
179
+ ```
180
+ Michigan's capital is Chaslany.
181
+
182
+ Here's a JSON object with 3 facts from Foolo's article:
183
+ {
184
+ "facts": [
185
+ {
186
+ "fact_no": 1,
187
+ "description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
188
+ },
189
+ {
190
+ "fact_no": 2,
191
+ "description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
192
+ },
193
+ {
194
+ "fact_no": 3,
195
+ "description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
196
+ }
197
+ ]
198
+ }
199
+
200
+ The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
201
+
202
+ Reference: https://newwikisite.com/Michigan
203
+ ```
204
+
205
+ </details>
206
+
207
+ <details>
208
+ <summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
209
+
210
+ Prompt:
211
+ ```
212
+ BEGININPUT
213
+ BEGINCONTEXT
214
+ name: John Doe
215
+ date: June 3, 2023
216
+ ticket number: JIRA-12345
217
+ ENDCONTEXT
218
+ Summary: Search results missing random items
219
+
220
+ Description:
221
+ I encountered a bug while performing a search within the application.
222
+ It appears that the search results are missing random items that should be displayed.
223
+ This issue is affecting the accuracy and completeness of the search functionality.
224
+
225
+ Steps to Reproduce:
226
+ 1. Log in to the application.
227
+ 2. Navigate to the search feature.
228
+ 3. Enter a search query that should return multiple results.
229
+ 4. Observe the displayed search results.
230
+
231
+ Expected Results:
232
+ The search results should include all relevant items matching the search query.
233
+
234
+ Actual Results:
235
+ The search results occasionally exclude random items that should be displayed.
236
+ It seems that the missing items do not follow a specific pattern or criteria.
237
+ Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
238
+ ENDINPUT
239
+
240
+ BEGININPUT
241
+ BEGINCONTEXT
242
+ date: 2023-06-05
243
+ user: Jack Johnson
244
+ pr: 23441
245
+ ENDCONTEXT
246
+ This pull request closes bug report JIRA-12345.
247
+
248
+ The issue was that the pagination code was using page size plus one instead of page size.
249
+ ENDINPUT
250
+
251
+ BEGININSTRUCTION
252
+ Do we have any bug reports related to search results? If so, were they fixed? Source?
253
+ ENDINSTRUCTION
254
+ ```
255
+ Response:
256
+ ```
257
+ Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
258
+
259
+ Reference(s):
260
+ date: 2023-06-05
261
+ user: Jack Johnson
262
+ pr: 23441
263
+ ```
264
+
265
+ </details>
266
+
267
+ ### Other stuff
268
+
269
+ #### Better coding
270
+
271
+ Example:
272
+ ```
273
+ Create a python application with the following requirements:
274
+ - Asyncio FastAPI webserver
275
+ - ping endpoint that returns the current date in JSON format
276
+ - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
277
+ ```
278
+
279
+ <details>
280
+ <summary>Response (the output was markdown, so this is just copy paste)</summary>
281
+
282
+ To create a python application with the given requirements, you can follow these steps:
283
+
284
+ 1. Install the required packages:
285
+ ```bash
286
+ pip install asyncio fastapi uvicorn
287
+ ```
288
+
289
+ 2. Create a new file called `main.py` and add the following code:
290
+
291
+ ```python
292
+ import asyncio
293
+ from fastapi import FastAPI, HTTPException
294
+ from pydantic import BaseModel
295
+ from typing import List
296
+ from uvicorn.request import Request
297
+
298
+ app = FastAPI()
299
+
300
+
301
+ class FileModel(BaseModel):
302
+ id: str
303
+ file: str
304
+
305
+
306
+ @app.post("/upload", response_model=FileModel)
307
+ async def upload_file(file_model: FileModel):
308
+ # Check if the file is valid
309
+ if not file_model.file:
310
+ raise HTTPException(status_code=400, detail="File must be provided")
311
+
312
+ # Calculate the sha256 checksum of the file
313
+ sha256_checksum = await calculate_sha256(file_model.file)
314
+
315
+ # Insert the file into the database
316
+ await insert_into_db(file_model.id, sha256_checksum)
317
+
318
+ return {"sha256_checksum": sha256_checksum}
319
+
320
+
321
+ async def calculate_sha256(file_path: str) -> str:
322
+ # Implement the sha256 calculation logic here
323
+ pass
324
+
325
+
326
+ async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
327
+ # Implement the database insertion logic here
328
+ pass
329
+
330
+
331
+ async def get_current_date():
332
+ return {"date": datetime.now().isoformat()}
333
+
334
+
335
+ async def main():
336
+ # Start the FastAPI webserver
337
+ app.run(debug=True)
338
+
339
+
340
+ if __name__ == "__main__":
341
+ await main()
342
+ ```
343
+
344
+ 3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
345
+
346
+ ```python
347
+ async def calculate_sha256(file_path: str) -> str:
348
+ import hashlib
349
+ file_path = Path(file_path).read_text()
350
+ hash_obj = hashlib.sha256()
351
+ hash_obj.update(file_path)
352
+ sha256_checksum = hash_obj.hexdigest()
353
+ return sha256_checksum
354
+
355
+
356
+ async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
357
+ import psycopg2
358
+ conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
359
+ cursor = conn.cursor()
360
+
361
+ query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
362
+ cursor.execute(query, (file_id, sha256_checksum))
363
+ conn.commit()
364
+
365
+ cursor.close()
366
+ conn.close()
367
+ ```
368
+
369
+ 4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
370
+
371
+ ```python
372
+ async def get_current_date():
373
+ return {"date": datetime.now().isoformat()}
374
+ ```
375
+
376
+ 5. Run the application:
377
+
378
+ ```bash
379
+ uvicorn main:app --reload
380
+ ```
381
+
382
+ Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
383
+
384
+ </details>
385
+
386
+ <details>
387
+ <summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
388
+
389
+ Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
390
+
391
+ ```c
392
+ #include <stdio.h>
393
+ #include <stdlib.h>
394
+ #include <sys/socket.h>
395
+ #include <netinet/in.h>
396
+ #include <unistd.h>
397
+ #include <string.h>
398
+ #include <pthread.h>
399
+
400
+ #define MAX_KEY_LEN 256
401
+ #define MAX_VALUE_LEN 256
402
+
403
+ typedef struct Node {
404
+ char key[MAX_KEY_LEN];
405
+ char value[MAX_VALUE_LEN];
406
+ struct Node* left;
407
+ struct Node* right;
408
+ } TreeNode;
409
+
410
+ TreeNode* tree = NULL;
411
+
412
+ void insert(char* key, char* value) {
413
+ TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode));
414
+ if (!parent) {
415
+ perror("malloc failed");
416
+ exit(1);
417
+ }
418
+
419
+ strcpy(parent->key, key);
420
+ strcpy(parent->value, value);
421
+
422
+ parent->left = parent->right = NULL;
423
+
424
+ if (!tree) {
425
+ tree = parent;
426
+ } else {
427
+ insert(parent->key, parent->value);
428
+ }
429
+ }
430
+
431
+ char* search(char* key) {
432
+ TreeNode* current = tree;
433
+ while (current != NULL) {
434
+ if (strcmp(key, current->key) == 0) {
435
+ return current->value;
436
+ }
437
+ current = current->right;
438
+ }
439
+
440
+ return NULL;
441
+ }
442
+
443
+ void print_tree() {
444
+ TreeNode* current = tree;
445
+ while (current != NULL) {
446
+ printf("%s = %s\n", current->key, current->value);
447
+ current = current->right;
448
+ }
449
+ }
450
+
451
+ void* accept_client(void* args) {
452
+ int client_socket = *(int*)args;
453
+ char buffer[1024];
454
+
455
+ while (1) {
456
+ ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
457
+ if (bytes_received <= 0) {
458
+ close(client_socket);
459
+ break;
460
+ }
461
+
462
+ char* input = buffer;
463
+ char* input_token;
464
+
465
+ while ((input_token = strtok(input, " \n")) != NULL) {
466
+ switch (input_token[0]) {
467
+ case 'G':
468
+ if (!search(input_token)) {
469
+ perror("Value not found");
470
+ exit(1);
471
+ }
472
+ printf("%s = %s\n", input_token, search(input_token));
473
+ break;
474
+
475
+ case 'S':
476
+ if (!strcmp(input_token, "SET")) {
477
+ char* key = strtok(NULL, " \n");
478
+ char* value = strtok(NULL, " \n");
479
+ if (!key || !value) {
480
+ perror("Invalid input");
481
+ exit(1);
482
+ }
483
+
484
+ insert(key, value);
485
+ printf("%s set %s to %s\n", input_token, key, value);
486
+ } else {
487
+ perror("Invalid command");
488
+ exit(1);
489
+ }
490
+ break;
491
+
492
+ default:
493
+ perror("Invalid input");
494
+ exit(1);
495
+ }
496
+
497
+ input = strtok(NULL, " \n");
498
+ }
499
+ }
500
+
501
+ return NULL;
502
+ }
503
+
504
+ int main() {
505
+ int server_socket = socket(AF_INET, SOCK_STREAM, 0);
506
+ if (server_socket < 0) {
507
+ perror("socket failed");
508
+ exit(1);
509
+ }
510
+
511
+ struct sockaddr_in server_addr;
512
+ memset(&server_addr, 0, sizeof(server_addr));
513
+ server_addr.sin_family = AF_INET;
514
+ server_addr.sin_port = htons(8080);
515
+ server_addr.sin_addr.s_addr = INADDR_ANY;
516
+
517
+ if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
518
+ perror("bind failed");
519
+ exit(1);
520
+ }
521
+
522
+ if (listen(server_socket, 5) < 0) {
523
+ perror("listen failed");
524
+ exit(1);
525
+ }
526
+
527
+ pthread_t accept_thread;
528
+ pthread_create(&accept_thread, NULL, accept_client, &server_socket);
529
+
530
+ char* client_input;
531
+ int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
532
+ if (client_socket < 0) {
533
+ perror("accept failed");
534
+ exit(1);
535
+ }
536
+
537
+ while (1) {
538
+ sleep(1);
539
+
540
+ char buffer[1024];
541
+ ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
542
+ if (bytes_received <= 0) {
543
+ close(client_socket);
544
+ break;
545
+ }
546
+
547
+ client_input = buffer;
548
+ parse_input(client_input);
549
+ }
550
+
551
+ close(client_socket);
552
+ pthread_join(accept_thread, NULL);
553
+
554
+ return 0;
555
+ }
556
+ ```
557
+
558
+ To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
559
+
560
+ ```sh
561
+ gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c
562
+ ./multi_threaded_tcp_server
563
+ ```
564
+
565
+ This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands.
566
+
567
+ Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
568
+
569
+ </details>
570
+
571
+ #### Word games / trivia
572
+
573
+ ```
574
+ USER: This anagram of "silent" is what you do when paying attention and hearing someone.
575
+ ASSISTANT: listen
576
+ ```
577
+
578
+ ```
579
+ USER: Give me a list of 7 words that start with EN
580
+ ASSISTANT: 1. Endlessly
581
+ 2. Enthusiastically
582
+ 3. Enormously
583
+ 4. Enveloping
584
+ 5. Enduring
585
+ 6. Enlightening
586
+ 7. Encouraging
587
+ ```
588
+
589
+ #### Multiple choice
590
+
591
+ ```
592
+ USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
593
+
594
+ A) Mercury
595
+ B) Venus
596
+ C) Pluto
597
+ D) Jupiter
598
+ ASSISTANT: [C, Pluto]
599
+ ```
600
+
601
+ #### Writing
602
+
603
+ <details>
604
+ <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
605
+
606
+ Response:
607
+ ```
608
+ ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
609
+
610
+ Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
611
+
612
+ No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
613
+
614
+ So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
615
+ ```
616
+
617
+ </details>
airoboros-13b-gpt4-128g-ts-ao.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:142effee1da713a3bb422f8aef7853257dd235dfdd02f2e137de8d38c606e3ef
3
+ size 7255159218
config.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "_name_or_path": "airoboros-13b-gpt4",
3
+ "architectures": [
4
+ "LlamaForCausalLM"
5
+ ],
6
+ "bos_token_id": 0,
7
+ "eos_token_id": 1,
8
+ "hidden_act": "silu",
9
+ "hidden_size": 5120,
10
+ "initializer_range": 0.02,
11
+ "intermediate_size": 13824,
12
+ "max_position_embeddings": 4096,
13
+ "max_seq_len": 4096,
14
+ "max_sequence_length": 4096,
15
+ "model_type": "llama",
16
+ "num_attention_heads": 40,
17
+ "num_hidden_layers": 40,
18
+ "pad_token_id": -1,
19
+ "rms_norm_eps": 1e-06,
20
+ "tie_word_embeddings": false,
21
+ "torch_dtype": "float32",
22
+ "transformers_version": "4.29.2",
23
+ "use_cache": true,
24
+ "vocab_size": 32000
25
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 0,
4
+ "eos_token_id": 1,
5
+ "pad_token_id": 0,
6
+ "transformers_version": "4.29.2"
7
+ }
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,410 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 52063467520
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "pytorch_model-00006-of-00006.bin",
7
+ "model.embed_tokens.weight": "pytorch_model-00001-of-00006.bin",
8
+ "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
9
+ "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
10
+ "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
11
+ "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
12
+ "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
13
+ "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
14
+ "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
15
+ "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
16
+ "model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
17
+ "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
18
+ "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
19
+ "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
20
+ "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
21
+ "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
22
+ "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
23
+ "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
24
+ "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
25
+ "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
26
+ "model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
27
+ "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
28
+ "model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
29
+ "model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
30
+ "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
31
+ "model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
32
+ "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
33
+ "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
34
+ "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
35
+ "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
36
+ "model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
37
+ "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
38
+ "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
39
+ "model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
40
+ "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
41
+ "model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
42
+ "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
43
+ "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
44
+ "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
45
+ "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
46
+ "model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
47
+ "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
48
+ "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
49
+ "model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
50
+ "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
51
+ "model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
52
+ "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
53
+ "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
54
+ "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
55
+ "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
56
+ "model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
57
+ "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
58
+ "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
59
+ "model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
60
+ "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
61
+ "model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
62
+ "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
63
+ "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
64
+ "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
65
+ "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
66
+ "model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
67
+ "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
68
+ "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
69
+ "model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
70
+ "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
71
+ "model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
72
+ "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
73
+ "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
74
+ "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
75
+ "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
76
+ "model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
77
+ "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
78
+ "model.layers.15.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
79
+ "model.layers.15.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
80
+ "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
81
+ "model.layers.15.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
82
+ "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
83
+ "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
84
+ "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
85
+ "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
86
+ "model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
87
+ "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
88
+ "model.layers.16.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
89
+ "model.layers.16.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
90
+ "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
91
+ "model.layers.16.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
92
+ "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
93
+ "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
94
+ "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
95
+ "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
96
+ "model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
97
+ "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
98
+ "model.layers.17.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
99
+ "model.layers.17.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
100
+ "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
101
+ "model.layers.17.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
102
+ "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
103
+ "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
104
+ "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
105
+ "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
106
+ "model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
107
+ "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
108
+ "model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
109
+ "model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
110
+ "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
111
+ "model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
112
+ "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
113
+ "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
114
+ "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
115
+ "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
116
+ "model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
117
+ "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
118
+ "model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
119
+ "model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
120
+ "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
121
+ "model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
122
+ "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
123
+ "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
124
+ "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
125
+ "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
126
+ "model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
127
+ "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
128
+ "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
129
+ "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
130
+ "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
131
+ "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
132
+ "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
133
+ "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
134
+ "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
135
+ "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
136
+ "model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
137
+ "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
138
+ "model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
139
+ "model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
140
+ "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
141
+ "model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
142
+ "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
143
+ "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
144
+ "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
145
+ "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
146
+ "model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
147
+ "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
148
+ "model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
149
+ "model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
150
+ "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
151
+ "model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
152
+ "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
153
+ "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
154
+ "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
155
+ "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
156
+ "model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
157
+ "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
158
+ "model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
159
+ "model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
160
+ "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
161
+ "model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
162
+ "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
163
+ "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
164
+ "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
165
+ "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
166
+ "model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
167
+ "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
168
+ "model.layers.23.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
169
+ "model.layers.23.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
170
+ "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
171
+ "model.layers.23.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
172
+ "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
173
+ "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
174
+ "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
175
+ "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
176
+ "model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
177
+ "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
178
+ "model.layers.24.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
179
+ "model.layers.24.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
180
+ "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
181
+ "model.layers.24.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
182
+ "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
183
+ "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
184
+ "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
185
+ "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
186
+ "model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
187
+ "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
188
+ "model.layers.25.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
189
+ "model.layers.25.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
190
+ "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
191
+ "model.layers.25.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
192
+ "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
193
+ "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
194
+ "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
195
+ "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
196
+ "model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
197
+ "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
198
+ "model.layers.26.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
199
+ "model.layers.26.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
200
+ "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
201
+ "model.layers.26.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
202
+ "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
203
+ "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
204
+ "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
205
+ "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
206
+ "model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
207
+ "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
208
+ "model.layers.27.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
209
+ "model.layers.27.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
210
+ "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
211
+ "model.layers.27.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
212
+ "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
213
+ "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
214
+ "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
215
+ "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
216
+ "model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
217
+ "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
218
+ "model.layers.28.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
219
+ "model.layers.28.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
220
+ "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
221
+ "model.layers.28.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
222
+ "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
223
+ "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
224
+ "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
225
+ "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
226
+ "model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
227
+ "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
228
+ "model.layers.29.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
229
+ "model.layers.29.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
230
+ "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
231
+ "model.layers.29.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
232
+ "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
233
+ "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
234
+ "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
235
+ "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
236
+ "model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
237
+ "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
238
+ "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
239
+ "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
240
+ "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
241
+ "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
242
+ "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
243
+ "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
244
+ "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
245
+ "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
246
+ "model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
247
+ "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
248
+ "model.layers.30.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
249
+ "model.layers.30.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
250
+ "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
251
+ "model.layers.30.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
252
+ "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
253
+ "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
254
+ "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
255
+ "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
256
+ "model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
257
+ "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
258
+ "model.layers.31.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
259
+ "model.layers.31.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
260
+ "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
261
+ "model.layers.31.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
262
+ "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
263
+ "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
264
+ "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
265
+ "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
266
+ "model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
267
+ "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
268
+ "model.layers.32.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
269
+ "model.layers.32.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
270
+ "model.layers.32.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
271
+ "model.layers.32.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
272
+ "model.layers.32.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
273
+ "model.layers.32.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
274
+ "model.layers.32.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
275
+ "model.layers.32.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
276
+ "model.layers.32.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
277
+ "model.layers.32.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
278
+ "model.layers.33.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
279
+ "model.layers.33.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
280
+ "model.layers.33.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
281
+ "model.layers.33.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
282
+ "model.layers.33.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
283
+ "model.layers.33.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
284
+ "model.layers.33.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
285
+ "model.layers.33.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
286
+ "model.layers.33.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
287
+ "model.layers.33.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
288
+ "model.layers.34.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
289
+ "model.layers.34.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
290
+ "model.layers.34.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
291
+ "model.layers.34.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
292
+ "model.layers.34.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
293
+ "model.layers.34.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
294
+ "model.layers.34.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
295
+ "model.layers.34.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
296
+ "model.layers.34.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
297
+ "model.layers.34.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
298
+ "model.layers.35.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
299
+ "model.layers.35.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
300
+ "model.layers.35.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
301
+ "model.layers.35.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
302
+ "model.layers.35.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
303
+ "model.layers.35.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
304
+ "model.layers.35.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
305
+ "model.layers.35.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
306
+ "model.layers.35.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
307
+ "model.layers.35.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
308
+ "model.layers.36.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
309
+ "model.layers.36.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
310
+ "model.layers.36.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
311
+ "model.layers.36.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
312
+ "model.layers.36.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
313
+ "model.layers.36.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
314
+ "model.layers.36.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
315
+ "model.layers.36.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
316
+ "model.layers.36.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
317
+ "model.layers.36.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
318
+ "model.layers.37.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
319
+ "model.layers.37.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
320
+ "model.layers.37.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
321
+ "model.layers.37.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
322
+ "model.layers.37.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
323
+ "model.layers.37.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
324
+ "model.layers.37.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
325
+ "model.layers.37.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
326
+ "model.layers.37.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
327
+ "model.layers.37.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
328
+ "model.layers.38.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
329
+ "model.layers.38.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
330
+ "model.layers.38.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
331
+ "model.layers.38.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
332
+ "model.layers.38.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
333
+ "model.layers.38.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
334
+ "model.layers.38.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
335
+ "model.layers.38.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
336
+ "model.layers.38.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
337
+ "model.layers.38.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
338
+ "model.layers.39.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
339
+ "model.layers.39.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
340
+ "model.layers.39.mlp.gate_proj.weight": "pytorch_model-00006-of-00006.bin",
341
+ "model.layers.39.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
342
+ "model.layers.39.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
343
+ "model.layers.39.self_attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
344
+ "model.layers.39.self_attn.o_proj.weight": "pytorch_model-00006-of-00006.bin",
345
+ "model.layers.39.self_attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
346
+ "model.layers.39.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00006.bin",
347
+ "model.layers.39.self_attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
348
+ "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
349
+ "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
350
+ "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
351
+ "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
352
+ "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
353
+ "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
354
+ "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
355
+ "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
356
+ "model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
357
+ "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
358
+ "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
359
+ "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
360
+ "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
361
+ "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
362
+ "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
363
+ "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
364
+ "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
365
+ "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
366
+ "model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
367
+ "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
368
+ "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
369
+ "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
370
+ "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
371
+ "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
372
+ "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
373
+ "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
374
+ "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
375
+ "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
376
+ "model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
377
+ "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
378
+ "model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
379
+ "model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
380
+ "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
381
+ "model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
382
+ "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
383
+ "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
384
+ "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
385
+ "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
386
+ "model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
387
+ "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
388
+ "model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
389
+ "model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
390
+ "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
391
+ "model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
392
+ "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
393
+ "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
394
+ "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
395
+ "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
396
+ "model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
397
+ "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
398
+ "model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
399
+ "model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
400
+ "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
401
+ "model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
402
+ "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
403
+ "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
404
+ "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
405
+ "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
406
+ "model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
407
+ "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
408
+ "model.norm.weight": "pytorch_model-00006-of-00006.bin"
409
+ }
410
+ }
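
The weight map above tells `transformers` which of the six shards each tensor lives in; `from_pretrained` resolves this automatically, so you never need to read it by hand. If you do want to inspect the sharding, here is a minimal sketch, assuming the repo has been downloaded to a local `airoboros-13b-gpt4` directory:

```
import json
from collections import Counter

with open("airoboros-13b-gpt4/pytorch_model.bin.index.json") as f:
    index = json.load(f)

# Count how many tensors live in each shard file.
for shard, count in sorted(Counter(index["weight_map"].values()).items()):
    print(shard, count)
```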
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<unk>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
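
These special-token definitions are what the tokenizer picks up when the repo is loaded. A quick sanity check, again assuming a local copy of the repo:

```
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("airoboros-13b-gpt4")
# Per special_tokens_map.json this should print: <s> </s> <unk> <unk>
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
```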
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
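
Note that this is a git-lfs pointer, not the SentencePiece model itself; the real file arrives with `git lfs pull`. If you want to confirm the downloaded file matches the pointer, a small sketch using the oid recorded above:

```
import hashlib

expected = "9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347"
with open("airoboros-13b-gpt4/tokenizer.model", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, f"checksum mismatch: {digest}"
```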
tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "clean_up_tokenization_spaces": false,
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "model_max_length": 4096,
+ "pad_token": null,
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
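
`model_max_length` here matches the extended 4096-token context, and `pad_token` is null. Batched inference generally needs some pad token; one common workaround (an assumption on my part, not something this repo prescribes, though it matches the `<unk>` pad in special_tokens_map.json) is:

```
from transformers import LlamaTokenizer

tok = LlamaTokenizer.from_pretrained("airoboros-13b-gpt4")
print(tok.model_max_length)  # 4096

# pad_token is null in tokenizer_config.json; fall back to <unk> for padding.
if tok.pad_token is None:
    tok.pad_token = tok.unk_token
```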
trainer_state.json ADDED
@@ -0,0 +1,1996 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 3.0275229357798166,
+ "global_step": 330,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {"epoch": 0.01, "learning_rate": 5.555555555555555e-07, "loss": 0.6498, "step": 1},
+ {"epoch": 0.02, "learning_rate": 1.111111111111111e-06, "loss": 0.6642, "step": 2},
+ {"epoch": 0.03, "learning_rate": 1.6666666666666667e-06, "loss": 0.6479, "step": 3},
+ {"epoch": 0.04, "learning_rate": 2.222222222222222e-06, "loss": 0.5813, "step": 4},
+ {"epoch": 0.05, "learning_rate": 2.7777777777777783e-06, "loss": 0.6306, "step": 5},
+ {"epoch": 0.06, "learning_rate": 3.3333333333333333e-06, "loss": 0.6881, "step": 6},
+ {"epoch": 0.06, "learning_rate": 3.88888888888889e-06, "loss": 0.6066, "step": 7},
+ {"epoch": 0.07, "learning_rate": 4.444444444444444e-06, "loss": 0.5688, "step": 8},
+ {"epoch": 0.08, "learning_rate": 5e-06, "loss": 0.5619, "step": 9},
+ {"epoch": 0.09, "learning_rate": 5.555555555555557e-06, "loss": 0.5144, "step": 10},
+ {"epoch": 0.1, "learning_rate": 6.111111111111112e-06, "loss": 0.5744, "step": 11},
+ {"epoch": 0.11, "learning_rate": 6.666666666666667e-06, "loss": 0.6051, "step": 12},
+ {"epoch": 0.12, "learning_rate": 7.222222222222223e-06, "loss": 0.5917, "step": 13},
+ {"epoch": 0.13, "learning_rate": 7.77777777777778e-06, "loss": 0.5239, "step": 14},
+ {"epoch": 0.14, "learning_rate": 8.333333333333334e-06, "loss": 0.5253, "step": 15},
+ {"epoch": 0.15, "learning_rate": 8.888888888888888e-06, "loss": 0.4143, "step": 16},
+ {"epoch": 0.16, "learning_rate": 9.444444444444445e-06, "loss": 0.4822, "step": 17},
+ {"epoch": 0.17, "learning_rate": 1e-05, "loss": 0.4627, "step": 18},
+ {"epoch": 0.17, "learning_rate": 9.999858783596665e-06, "loss": 0.5389, "step": 19},
+ {"epoch": 0.18, "learning_rate": 9.999435142363484e-06, "loss": 0.5544, "step": 20},
+ {"epoch": 0.19, "learning_rate": 9.998729100230497e-06, "loss": 0.5013, "step": 21},
+ {"epoch": 0.2, "learning_rate": 9.997740697079595e-06, "loss": 0.5035, "step": 22},
+ {"epoch": 0.21, "learning_rate": 9.99646998874227e-06, "loss": 0.4924, "step": 23},
+ {"epoch": 0.22, "learning_rate": 9.994917046996472e-06, "loss": 0.507, "step": 24},
+ {"epoch": 0.23, "learning_rate": 9.993081959562539e-06, "loss": 0.4876, "step": 25},
+ {"epoch": 0.24, "learning_rate": 9.990964830098246e-06, "loss": 0.555, "step": 26},
+ {"epoch": 0.25, "learning_rate": 9.98856577819296e-06, "loss": 0.4692, "step": 27},
+ {"epoch": 0.26, "learning_rate": 9.985884939360873e-06, "loss": 0.466, "step": 28},
+ {"epoch": 0.27, "learning_rate": 9.98292246503335e-06, "loss": 0.4132, "step": 29},
+ {"epoch": 0.28, "learning_rate": 9.979678522550382e-06, "loss": 0.4741, "step": 30},
+ {"epoch": 0.28, "learning_rate": 9.976153295151123e-06, "loss": 0.5424, "step": 31},
+ {"epoch": 0.29, "learning_rate": 9.972346981963546e-06, "loss": 0.4949, "step": 32},
+ {"epoch": 0.3, "learning_rate": 9.968259797993197e-06, "loss": 0.5126, "step": 33},
+ {"epoch": 0.31, "learning_rate": 9.963891974111042e-06, "loss": 0.4202, "step": 34},
+ {"epoch": 0.32, "learning_rate": 9.959243757040434e-06, "loss": 0.4736, "step": 35},
+ {"epoch": 0.33, "learning_rate": 9.95431540934317e-06, "loss": 0.3851, "step": 36},
+ {"epoch": 0.34, "learning_rate": 9.949107209404664e-06, "loss": 0.5542, "step": 37},
+ {"epoch": 0.35, "learning_rate": 9.943619451418225e-06, "loss": 0.5095, "step": 38},
+ {"epoch": 0.36, "learning_rate": 9.937852445368427e-06, "loss": 0.4338, "step": 39},
+ {"epoch": 0.37, "learning_rate": 9.931806517013612e-06, "loss": 0.45, "step": 40},
+ {"epoch": 0.38, "learning_rate": 9.925482007867485e-06, "loss": 0.4735, "step": 41},
+ {"epoch": 0.39, "learning_rate": 9.918879275179819e-06, "loss": 0.4735, "step": 42},
+ {"epoch": 0.39, "learning_rate": 9.911998691916275e-06, "loss": 0.5041, "step": 43},
+ {"epoch": 0.4, "learning_rate": 9.904840646737346e-06, "loss": 0.4432, "step": 44},
+ {"epoch": 0.41, "learning_rate": 9.89740554397639e-06, "loss": 0.4457, "step": 45},
+ {"epoch": 0.42, "learning_rate": 9.889693803616793e-06, "loss": 0.4692, "step": 46},
+ {"epoch": 0.43, "learning_rate": 9.881705861268252e-06, "loss": 0.4469, "step": 47},
+ {"epoch": 0.44, "learning_rate": 9.873442168142158e-06, "loss": 0.4389, "step": 48},
+ {"epoch": 0.45, "learning_rate": 9.864903191026125e-06, "loss": 0.4195, "step": 49},
+ {"epoch": 0.46, "learning_rate": 9.856089412257605e-06, "loss": 0.4635, "step": 50},
+ {"epoch": 0.47, "learning_rate": 9.847001329696653e-06, "loss": 0.4944, "step": 51},
+ {"epoch": 0.48, "learning_rate": 9.837639456697802e-06, "loss": 0.4917, "step": 52},
+ {"epoch": 0.49, "learning_rate": 9.828004322081067e-06, "loss": 0.4197, "step": 53},
+ {"epoch": 0.5, "learning_rate": 9.818096470102067e-06, "loss": 0.4294, "step": 54},
+ {"epoch": 0.5, "learning_rate": 9.807916460421294e-06, "loss": 0.4839, "step": 55},
+ {"epoch": 0.51, "learning_rate": 9.797464868072489e-06, "loss": 0.4918, "step": 56},
+ {"epoch": 0.52, "learning_rate": 9.78674228343016e-06, "loss": 0.5003, "step": 57},
+ {"epoch": 0.53, "learning_rate": 9.775749312176249e-06, "loss": 0.4864, "step": 58},
+ {"epoch": 0.54, "learning_rate": 9.764486575265893e-06, "loss": 0.4977, "step": 59},
+ {"epoch": 0.55, "learning_rate": 9.752954708892379e-06, "loss": 0.4125, "step": 60},
+ {"epoch": 0.56, "learning_rate": 9.741154364451179e-06, "loss": 0.4442, "step": 61},
+ {"epoch": 0.57, "learning_rate": 9.729086208503174e-06, "loss": 0.4624, "step": 62},
+ {"epoch": 0.58, "learning_rate": 9.716750922736998e-06, "loss": 0.3754, "step": 63},
+ {"epoch": 0.59, "learning_rate": 9.704149203930522e-06, "loss": 0.4218, "step": 64},
+ {"epoch": 0.6, "learning_rate": 9.691281763911513e-06, "loss": 0.4997, "step": 65},
+ {"epoch": 0.61, "learning_rate": 9.67814932951741e-06, "loss": 0.3829, "step": 66},
+ {"epoch": 0.61, "learning_rate": 9.664752642554272e-06, "loss": 0.3637, "step": 67},
+ {"epoch": 0.62, "learning_rate": 9.651092459754879e-06, "loss": 0.5271, "step": 68},
+ {"epoch": 0.63, "learning_rate": 9.637169552735985e-06, "loss": 0.4425, "step": 69},
+ {"epoch": 0.64, "learning_rate": 9.622984707954732e-06, "loss": 0.4109, "step": 70},
+ {"epoch": 0.65, "learning_rate": 9.608538726664224e-06, "loss": 0.3699, "step": 71},
+ {"epoch": 0.66, "learning_rate": 9.593832424868271e-06, "loss": 0.4199, "step": 72},
+ {"epoch": 0.67, "learning_rate": 9.578866633275289e-06, "loss": 0.4548, "step": 73},
+ {"epoch": 0.68, "learning_rate": 9.563642197251382e-06, "loss": 0.4464, "step": 74},
+ {"epoch": 0.69, "learning_rate": 9.548159976772593e-06, "loss": 0.4286, "step": 75},
+ {"epoch": 0.7, "learning_rate": 9.532420846376316e-06, "loss": 0.397, "step": 76},
+ {"epoch": 0.71, "learning_rate": 9.516425695111906e-06, "loss": 0.4522, "step": 77},
+ {"epoch": 0.72, "learning_rate": 9.500175426490455e-06, "loss": 0.4763, "step": 78},
+ {"epoch": 0.72, "learning_rate": 9.48367095843376e-06, "loss": 0.4386, "step": 79},
+ {"epoch": 0.73, "learning_rate": 9.466913223222467e-06, "loss": 0.411, "step": 80},
+ {"epoch": 0.74, "learning_rate": 9.449903167443415e-06, "loss": 0.3719, "step": 81},
+ {"epoch": 0.75, "learning_rate": 9.432641751936162e-06, "loss": 0.3693, "step": 82},
+ {"epoch": 0.76, "learning_rate": 9.415129951738713e-06, "loss": 0.3955, "step": 83},
+ {"epoch": 0.77, "learning_rate": 9.397368756032445e-06, "loss": 0.4196, "step": 84},
+ {"epoch": 0.78, "learning_rate": 9.379359168086231e-06, "loss": 0.4527, "step": 85},
+ {"epoch": 0.79, "learning_rate": 9.361102205199762e-06, "loss": 0.4054, "step": 86},
+ {"epoch": 0.8, "learning_rate": 9.34259889864609e-06, "loss": 0.4001, "step": 87},
+ {"epoch": 0.81, "learning_rate": 9.32385029361338e-06, "loss": 0.4189, "step": 88},
+ {"epoch": 0.82, "learning_rate": 9.304857449145858e-06, "loss": 0.439, "step": 89},
+ {"epoch": 0.83, "learning_rate": 9.285621438083997e-06, "loss": 0.4221, "step": 90},
+ {"epoch": 0.83, "learning_rate": 9.26614334700392e-06, "loss": 0.4208, "step": 91},
+ {"epoch": 0.84, "learning_rate": 9.246424276156008e-06, "loss": 0.4707, "step": 92},
+ {"epoch": 0.85, "learning_rate": 9.226465339402768e-06, "loss": 0.3372, "step": 93},
+ {"epoch": 0.86, "learning_rate": 9.206267664155906e-06, "loss": 0.3637, "step": 94},
+ {"epoch": 0.87, "learning_rate": 9.185832391312644e-06, "loss": 0.4146, "step": 95},
+ {"epoch": 0.88, "learning_rate": 9.165160675191272e-06, "loss": 0.393, "step": 96},
+ {"epoch": 0.89, "learning_rate": 9.144253683465953e-06, "loss": 0.4712, "step": 97},
+ {"epoch": 0.9, "learning_rate": 9.123112597100759e-06, "loss": 0.3904, "step": 98},
+ {"epoch": 0.91, "learning_rate": 9.101738610282956e-06, "loss": 0.4192, "step": 99},
+ {"epoch": 0.92, "learning_rate": 9.080132930355567e-06, "loss": 0.4039, "step": 100},
+ {"epoch": 0.93, "learning_rate": 9.058296777749154e-06, "loss": 0.4516, "step": 101},
+ {"epoch": 0.94, "learning_rate": 9.03623138591289e-06, "loss": 0.3348, "step": 102},
+ {"epoch": 0.94, "learning_rate": 9.013938001244885e-06, "loss": 0.4256, "step": 103},
+ {"epoch": 0.95, "learning_rate": 8.99141788302178e-06, "loss": 0.3774, "step": 104},
+ {"epoch": 0.96, "learning_rate": 8.968672303327614e-06, "loss": 0.4471, "step": 105},
+ {"epoch": 0.97, "learning_rate": 8.94570254698197e-06, "loss": 0.3771, "step": 106},
+ {"epoch": 0.98, "learning_rate": 8.922509911467395e-06, "loss": 0.4137, "step": 107},
+ {"epoch": 0.99, "learning_rate": 8.899095706856122e-06, "loss": 0.4269, "step": 108},
+ {"epoch": 1.0, "learning_rate": 8.875461255736055e-06, "loss": 0.4102, "step": 109},
+ {"epoch": 1.01, "learning_rate": 8.851607893136065e-06, "loss": 0.2619, "step": 110},
+ {"epoch": 1.02, "learning_rate": 8.827536966450584e-06, "loss": 0.2864, "step": 111},
+ {"epoch": 1.03, "learning_rate": 8.803249835363486e-06, "loss": 0.2712, "step": 112},
+ {"epoch": 1.04, "learning_rate": 8.778747871771293e-06, "loss": 0.2186, "step": 113},
+ {"epoch": 1.05, "learning_rate": 8.754032459705672e-06, "loss": 0.2585, "step": 114},
+ {"epoch": 1.06, "learning_rate": 8.729104995255265e-06, "loss": 0.2711, "step": 115},
+ {"epoch": 1.06, "learning_rate": 8.703966886486819e-06, "loss": 0.2801, "step": 116},
+ {"epoch": 1.07, "learning_rate": 8.67861955336566e-06, "loss": 0.2586, "step": 117},
+ {"epoch": 1.08, "learning_rate": 8.65306442767547e-06, "loss": 0.3134, "step": 118},
+ {"epoch": 1.09, "learning_rate": 8.627302952937431e-06, "loss": 0.2067, "step": 119},
+ {"epoch": 1.1, "learning_rate": 8.601336584328659e-06, "loss": 0.2637, "step": 120},
+ {"epoch": 1.11, "learning_rate": 8.575166788600031e-06, "loss": 0.2445, "step": 121},
+ {"epoch": 1.12, "learning_rate": 8.548795043993316e-06, "loss": 0.2456, "step": 122},
+ {"epoch": 1.13, "learning_rate": 8.522222840157687e-06, "loss": 0.2963, "step": 123},
+ {"epoch": 1.14, "learning_rate": 8.495451678065563e-06, "loss": 0.2781, "step": 124},
+ {"epoch": 1.15, "learning_rate": 8.468483069927832e-06, "loss": 0.2572, "step": 125},
+ {"epoch": 1.16, "learning_rate": 8.441318539108433e-06, "loss": 0.2473, "step": 126},
+ {"epoch": 1.17, "learning_rate": 8.413959620038306e-06, "loss": 0.1962, "step": 127},
+ {"epoch": 1.17, "learning_rate": 8.386407858128707e-06, "loss": 0.2644, "step": 128},
+ {"epoch": 1.18, "learning_rate": 8.358664809683926e-06, "loss": 0.2451, "step": 129},
+ {"epoch": 1.19, "learning_rate": 8.330732041813367e-06, "loss": 0.2801, "step": 130},
+ {"epoch": 1.2, "learning_rate": 8.302611132343042e-06, "loss": 0.2425, "step": 131},
+ {"epoch": 1.21, "learning_rate": 8.274303669726427e-06, "loss": 0.2113, "step": 132},
+ {"epoch": 1.22, "learning_rate": 8.245811252954741e-06, "loss": 0.2918, "step": 133},
+ {"epoch": 1.23, "learning_rate": 8.217135491466636e-06, "loss": 0.2356, "step": 134},
+ {"epoch": 1.24, "learning_rate": 8.18827800505727e-06, "loss": 0.2809, "step": 135},
+ {"epoch": 1.25, "learning_rate": 8.15924042378682e-06, "loss": 0.2559, "step": 136},
+ {"epoch": 1.26, "learning_rate": 8.130024387888402e-06, "loss": 0.2446, "step": 137},
+ {"epoch": 1.27, "learning_rate": 8.100631547675417e-06, "loss": 0.2314, "step": 138},
+ {"epoch": 1.28, "learning_rate": 8.071063563448341e-06, "loss": 0.2729, "step": 139},
+ {"epoch": 1.28, "learning_rate": 8.041322105400923e-06, "loss": 0.2017, "step": 140},
+ {"epoch": 1.29, "learning_rate": 8.01140885352586e-06, "loss": 0.2195, "step": 141},
+ {"epoch": 1.3, "learning_rate": 7.981325497519892e-06, "loss": 0.2934, "step": 142},
+ {"epoch": 1.31, "learning_rate": 7.951073736688348e-06, "loss": 0.2598, "step": 143},
+ {"epoch": 1.32, "learning_rate": 7.920655279849173e-06, "loss": 0.2776, "step": 144},
+ {"epoch": 1.33, "learning_rate": 7.890071845236395e-06, "loss": 0.2622, "step": 145},
+ {"epoch": 1.34, "learning_rate": 7.859325160403073e-06, "loss": 0.2621, "step": 146},
+ {"epoch": 1.35, "learning_rate": 7.8284169621237e-06, "loss": 0.2499, "step": 147},
+ {"epoch": 1.36, "learning_rate": 7.797348996296116e-06, "loss": 0.28, "step": 148},
+ {"epoch": 1.37, "learning_rate": 7.766123017842877e-06, "loss": 0.1906, "step": 149},
+ {"epoch": 1.38, "learning_rate": 7.734740790612137e-06, "loss": 0.2373, "step": 150},
+ {"epoch": 1.39, "learning_rate": 7.703204087277989e-06, "loss": 0.2, "step": 151},
+ {"epoch": 1.39, "learning_rate": 7.671514689240366e-06, "loss": 0.2678, "step": 152},
+ {"epoch": 1.4, "learning_rate": 7.639674386524395e-06, "loss": 0.2295, "step": 153},
+ {"epoch": 1.41, "learning_rate": 7.607684977679284e-06, "loss": 0.2417, "step": 154},
+ {"epoch": 1.42, "learning_rate": 7.575548269676741e-06, "loss": 0.2662, "step": 155},
+ {"epoch": 1.43, "learning_rate": 7.543266077808893e-06, "loss": 0.2645, "step": 156},
+ {"epoch": 1.44, "learning_rate": 7.510840225585749e-06, "loss": 0.2757, "step": 157},
+ {"epoch": 1.45, "learning_rate": 7.478272544632204e-06, "loss": 0.2324, "step": 158},
+ {"epoch": 1.46, "learning_rate": 7.44556487458456e-06, "loss": 0.2655, "step": 159},
+ {"epoch": 1.47, "learning_rate": 7.412719062986632e-06, "loss": 0.2091, "step": 160},
+ {"epoch": 1.48, "learning_rate": 7.379736965185369e-06, "loss": 0.2293, "step": 161},
+ {"epoch": 1.49, "learning_rate": 7.3466204442260605e-06, "loss": 0.246, "step": 162},
+ {"epoch": 1.5, "learning_rate": 7.313371370747104e-06, "loss": 0.2362, "step": 163},
+ {"epoch": 1.5, "learning_rate": 7.279991622874319e-06, "loss": 0.2074, "step": 164},
+ {"epoch": 1.51, "learning_rate": 7.24648308611489e-06, "loss": 0.2035, "step": 165},
+ {"epoch": 1.52, "learning_rate": 7.212847653250828e-06, "loss": 0.3013, "step": 166},
+ {"epoch": 1.53, "learning_rate": 7.1790872242320775e-06, "loss": 0.2286, "step": 167},
+ {"epoch": 1.54, "learning_rate": 7.145203706069183e-06, "loss": 0.2005, "step": 168},
+ {"epoch": 1.55, "learning_rate": 7.1111990127255684e-06, "loss": 0.2351, "step": 169},
+ {"epoch": 1.56, "learning_rate": 7.0770750650094335e-06, "loss": 0.2378, "step": 170},
+ {"epoch": 1.57, "learning_rate": 7.042833790465241e-06, "loss": 0.2557, "step": 171},
+ {"epoch": 1.58, "learning_rate": 7.008477123264849e-06, "loss": 0.1742, "step": 172},
+ {"epoch": 1.59, "learning_rate": 6.974007004098243e-06, "loss": 0.2862, "step": 173},
+ {"epoch": 1.6, "learning_rate": 6.939425380063924e-06, "loss": 0.2359, "step": 174},
+ {"epoch": 1.61, "learning_rate": 6.9047342045589224e-06, "loss": 0.2646, "step": 175},
+ {"epoch": 1.61, "learning_rate": 6.869935437168449e-06, "loss": 0.2223, "step": 176},
+ {"epoch": 1.62, "learning_rate": 6.835031043555211e-06, "loss": 0.2292, "step": 177},
+ {"epoch": 1.63, "learning_rate": 6.800022995348381e-06, "loss": 0.2395, "step": 178},
+ {"epoch": 1.64, "learning_rate": 6.76491327003222e-06, "loss": 0.2201, "step": 179},
+ {"epoch": 1.65, "learning_rate": 6.729703850834381e-06, "loss": 0.2071, "step": 180},
+ {"epoch": 1.66, "learning_rate": 6.694396726613883e-06, "loss": 0.2461, "step": 181},
+ {"epoch": 1.67, "learning_rate": 6.65899389174876e-06, "loss": 0.2804, "step": 182},
+ {"epoch": 1.68, "learning_rate": 6.6234973460234184e-06, "loss": 0.2859, "step": 183},
+ {"epoch": 1.69, "learning_rate": 6.587909094515663e-06, "loss": 0.2358, "step": 184},
+ {"epoch": 1.7, "learning_rate": 6.552231147483448e-06, "loss": 0.229, "step": 185},
+ {"epoch": 1.71, "learning_rate": 6.5164655202513135e-06, "loss": 0.2673, "step": 186},
+ {"epoch": 1.72, "learning_rate": 6.480614233096558e-06, "loss": 0.2169, "step": 187},
+ {"epoch": 1.72, "learning_rate": 6.444679311135112e-06, "loss": 0.1908, "step": 188},
+ {"epoch": 1.73, "learning_rate": 6.408662784207149e-06, "loss": 0.2167, "step": 189},
+ {"epoch": 1.74, "learning_rate": 6.372566686762427e-06, "loss": 0.2533, "step": 190},
+ {"epoch": 1.75, "learning_rate": 6.336393057745365e-06, "loss": 0.2361, "step": 191},
+ {"epoch": 1.76, "learning_rate": 6.300143940479881e-06, "loss": 0.2396, "step": 192},
+ {"epoch": 1.77, "learning_rate": 6.2638213825539595e-06, "loss": 0.2716, "step": 193},
+ {"epoch": 1.78, "learning_rate": 6.227427435703997e-06, "loss": 0.2187, "step": 194},
+ {"epoch": 1.79, "learning_rate": 6.190964155698903e-06, "loss": 0.2419, "step": 195},
+ {"epoch": 1.8, "learning_rate": 6.154433602223979e-06, "loss": 0.2547, "step": 196},
+ {"epoch": 1.81, "learning_rate": 6.117837838764579e-06, "loss": 0.2534, "step": 197},
+ {"epoch": 1.82, "learning_rate": 6.0811789324895365e-06, "loss": 0.255, "step": 198},
+ {"epoch": 1.83, "learning_rate": 6.044458954134411e-06, "loss": 0.2383, "step": 199},
+ {"epoch": 1.83, "learning_rate": 6.0076799778845105e-06, "loss": 0.2407, "step": 200},
+ {"epoch": 1.84, "learning_rate": 5.970844081257734e-06, "loss": 0.1883, "step": 201},
+ {"epoch": 1.85, "learning_rate": 5.933953344987215e-06, "loss": 0.3112, "step": 202},
+ {"epoch": 1.86, "learning_rate": 5.897009852903792e-06, "loss": 0.2424, "step": 203},
+ {"epoch": 1.87, "learning_rate": 5.860015691818292e-06, "loss": 0.2663, "step": 204},
+ {"epoch": 1.88, "learning_rate": 5.82297295140367e-06, "loss": 0.2616, "step": 205},
+ {"epoch": 1.89, "learning_rate": 5.78588372407695e-06, "loss": 0.2067, "step": 206},
+ {"epoch": 1.9, "learning_rate": 5.748750104881051e-06, "loss": 0.2092, "step": 207},
+ {"epoch": 1.91, "learning_rate": 5.711574191366427e-06, "loss": 0.2383, "step": 208},
+ {"epoch": 1.92, "learning_rate": 5.674358083472598e-06, "loss": 0.2377, "step": 209},
+ {"epoch": 1.93, "learning_rate": 5.637103883409525e-06, "loss": 0.2415, "step": 210},
+ {"epoch": 1.94, "learning_rate": 5.599813695538866e-06, "loss": 0.2478, "step": 211},
+ {"epoch": 1.94, "learning_rate": 5.562489626255104e-06, "loss": 0.2752, "step": 212},
+ {"epoch": 1.95, "learning_rate": 5.52513378386657e-06, "loss": 0.2133, "step": 213},
+ {"epoch": 1.96, "learning_rate": 5.487748278476342e-06, "loss": 0.2043, "step": 214},
+ {"epoch": 1.97, "learning_rate": 5.450335221863068e-06, "loss": 0.2664, "step": 215},
+ {"epoch": 1.98, "learning_rate": 5.412896727361663e-06, "loss": 0.2617, "step": 216},
+ {"epoch": 1.99, "learning_rate": 5.375434909743942e-06, "loss": 0.2001, "step": 217},
+ {"epoch": 2.0, "learning_rate": 5.337951885099167e-06, "loss": 0.2353, "step": 218},
+ {"epoch": 2.01, "learning_rate": 5.300449770714502e-06, "loss": 0.112, "step": 219},
+ {"epoch": 2.02, "learning_rate": 5.262930684955439e-06, "loss": 0.1022, "step": 220},
+ {"epoch": 2.03, "learning_rate": 5.225396747146112e-06, "loss": 0.1263, "step": 221},
+ {"epoch": 2.04, "learning_rate": 5.187850077449604e-06, "loss": 0.1059, "step": 222},
+ {"epoch": 2.05, "learning_rate": 5.150292796748174e-06, "loss": 0.1101, "step": 223},
+ {"epoch": 2.06, "learning_rate": 5.112727026523461e-06, "loss": 0.1274, "step": 224},
+ {"epoch": 2.06, "learning_rate": 5.075154888736653e-06, "loss": 0.1604, "step": 225},
+ {"epoch": 2.07, "learning_rate": 5.03757850570861e-06, "loss": 0.1067, "step": 226},
+ {"epoch": 2.08, "learning_rate": 5e-06, "loss": 0.1124, "step": 227},
+ {"epoch": 2.09, "learning_rate": 4.9624214942913916e-06, "loss": 0.0976, "step": 228},
+ {"epoch": 2.1, "learning_rate": 4.924845111263349e-06, "loss": 0.1094, "step": 229},
+ {"epoch": 2.11, "learning_rate": 4.88727297347654e-06, "loss": 0.1156, "step": 230},
+ {"epoch": 2.12, "learning_rate": 4.8497072032518274e-06, "loss": 0.1201, "step": 231},
+ {"epoch": 2.13, "learning_rate": 4.8121499225503974e-06, "loss": 0.1053, "step": 232},
+ {"epoch": 2.14, "learning_rate": 4.774603252853889e-06, "loss": 0.1344, "step": 233},
+ {"epoch": 2.15, "learning_rate": 4.737069315044562e-06, "loss": 0.1171, "step": 234},
+ {"epoch": 2.16, "learning_rate": 4.699550229285499e-06, "loss": 0.0906, "step": 235},
+ {"epoch": 2.17, "learning_rate": 4.662048114900837e-06, "loss": 0.1313, "step": 236},
+ {"epoch": 2.17, "learning_rate": 4.624565090256059e-06, "loss": 0.1096, "step": 237},
+ {"epoch": 2.18, "learning_rate": 4.587103272638339e-06, "loss": 0.1185, "step": 238},
+ {"epoch": 2.19, "learning_rate": 4.549664778136933e-06, "loss": 0.1372, "step": 239},
+ {"epoch": 2.2, "learning_rate": 4.512251721523659e-06, "loss": 0.1042, "step": 240},
+ {"epoch": 2.21, "learning_rate": 4.4748662161334335e-06, "loss": 0.1185, "step": 241},
+ {"epoch": 2.22, "learning_rate": 4.437510373744897e-06, "loss": 0.1448, "step": 242},
+ {"epoch": 2.23, "learning_rate": 4.400186304461136e-06, "loss": 0.113, "step": 243},
+ {"epoch": 2.24, "learning_rate": 4.362896116590475e-06, "loss": 0.1211, "step": 244},
+ {"epoch": 2.25, "learning_rate": 4.325641916527405e-06, "loss": 0.132, "step": 245},
+ {"epoch": 2.26, "learning_rate": 4.2884258086335755e-06, "loss": 0.1105, "step": 246},
+ {"epoch": 2.27, "learning_rate": 4.25124989511895e-06, "loss": 0.0987, "step": 247},
+ {"epoch": 2.28, "learning_rate": 4.214116275923051e-06, "loss": 0.1197, "step": 248},
+ {"epoch": 2.28, "learning_rate": 4.17702704859633e-06, "loss": 0.1334, "step": 249},
+ {"epoch": 2.29, "learning_rate": 4.1399843081817085e-06, "loss": 0.1468, "step": 250},
+ {"epoch": 2.3, "learning_rate": 4.1029901470962105e-06, "loss": 0.1161, "step": 251},
+ {"epoch": 2.31, "learning_rate": 4.066046655012786e-06, "loss": 0.1106, "step": 252},
+ {"epoch": 2.32, "learning_rate": 4.029155918742268e-06, "loss": 0.1234, "step": 253},
+ {"epoch": 2.33, "learning_rate": 3.992320022115492e-06, "loss": 0.1261, "step": 254},
+ {"epoch": 2.34, "learning_rate": 3.955541045865591e-06, "loss": 0.1011, "step": 255},
+ {"epoch": 2.35, "learning_rate": 3.918821067510464e-06, "loss": 0.1058, "step": 256},
+ {"epoch": 2.36, "learning_rate": 3.882162161235421e-06, "loss": 0.1002, "step": 257},
+ {"epoch": 2.37, "learning_rate": 3.845566397776022e-06, "loss": 0.1023, "step": 258},
+ {"epoch": 2.38, "learning_rate": 3.8090358443010993e-06, "loss": 0.1174, "step": 259},
+ {"epoch": 2.39, "learning_rate": 3.7725725642960047e-06, "loss": 0.111, "step": 260},
+ {"epoch": 2.39, "learning_rate": 3.7361786174460414e-06, "loss": 0.0948, "step": 261},
+ {"epoch": 2.4, "learning_rate": 3.6998560595201188e-06, "loss": 0.0972, "step": 262},
+ {"epoch": 2.41, "learning_rate": 3.6636069422546363e-06, "loss": 0.0982, "step": 263},
+ {"epoch": 2.42, "learning_rate": 3.627433313237576e-06, "loss": 0.1374, "step": 264},
+ {"epoch": 2.43, "learning_rate": 3.5913372157928515e-06, "loss": 0.0988, "step": 265},
+ {"epoch": 2.44, "learning_rate": 3.555320688864889e-06, "loss": 0.1281, "step": 266},
+ {"epoch": 2.45, "learning_rate": 3.519385766903442e-06, "loss": 0.1273, "step": 267},
+ {"epoch": 2.46, "learning_rate": 3.483534479748688e-06, "loss": 0.1256, "step": 268},
+ {"epoch": 2.47, "learning_rate": 3.447768852516554e-06, "loss": 0.1131, "step": 269},
+ {"epoch": 2.48, "learning_rate": 3.4120909054843375e-06, "loss": 0.0974, "step": 270},
+ {"epoch": 2.49, "learning_rate": 3.3765026539765832e-06, "loss": 0.1233, "step": 271},
+ {"epoch": 2.5, "learning_rate": 3.3410061082512422e-06, "loss": 0.1175, "step": 272},
+ {"epoch": 2.5, "learning_rate": 3.3056032733861188e-06, "loss": 0.1178, "step": 273},
+ {"epoch": 2.51, "learning_rate": 3.2702961491656197e-06, "loss": 0.11, "step": 274},
+ {"epoch": 2.52, "learning_rate": 3.2350867299677802e-06, "loss": 0.1091, "step": 275},
+ {"epoch": 2.53, "learning_rate": 3.1999770046516198e-06, "loss": 0.1146, "step": 276},
+ {"epoch": 2.54, "learning_rate": 3.164968956444791e-06, "loss": 0.0945, "step": 277},
+ {"epoch": 2.55, "learning_rate": 3.130064562831553e-06, "loss": 0.1216, "step": 278},
+ {"epoch": 2.56, "learning_rate": 3.0952657954410792e-06, "loss": 0.1273, "step": 279},
+ {"epoch": 2.57, "learning_rate": 3.0605746199360755e-06, "loss": 0.1161, "step": 280},
+ {"epoch": 2.58, "learning_rate": 3.0259929959017585e-06, "loss": 0.1395, "step": 281},
+ {"epoch": 2.59, "learning_rate": 2.991522876735154e-06, "loss": 0.1224, "step": 282},
+ {"epoch": 2.6, "learning_rate": 2.95716620953476e-06, "loss": 0.1052, "step": 283},
+ {"epoch": 2.61, "learning_rate": 2.9229249349905686e-06, "loss": 0.1383, "step": 284},
+ {"epoch": 2.61, "learning_rate": 2.8888009872744332e-06, "loss": 0.0985, "step": 285},
+ {"epoch": 2.62, "learning_rate": 2.8547962939308187e-06, "loss": 0.1164, "step": 286},
+ {"epoch": 2.63, "learning_rate": 2.8209127757679246e-06, "loss": 0.095, "step": 287},
+ {"epoch": 2.64, "learning_rate": 2.787152346749173e-06, "loss": 0.1308, "step": 288},
+ {"epoch": 2.65, "learning_rate": 2.7535169138851124e-06, "loss": 0.1336, "step": 289},
+ {"epoch": 2.66, "learning_rate": 2.720008377125682e-06, "loss": 0.0952, "step": 290},
+ {"epoch": 2.67, "learning_rate": 2.686628629252899e-06, "loss": 0.134, "step": 291},
+ {"epoch": 2.68, "learning_rate": 2.6533795557739407e-06, "loss": 0.1008, "step": 292},
+ {"epoch": 2.69, "learning_rate": 2.6202630348146323e-06, "loss": 0.0938, "step": 293},
+ {"epoch": 2.7, "learning_rate": 2.5872809370133704e-06, "loss": 0.1146, "step": 294},
+ {"epoch": 2.71, "learning_rate": 2.5544351254154407e-06, "loss": 0.0909, "step": 295},
+ {"epoch": 2.72, "learning_rate": 2.5217274553677975e-06, "loss": 0.1054, "step": 296},
+ {"epoch": 2.72, "learning_rate": 2.489159774414252e-06, "loss": 0.0909, "step": 297},
+ {"epoch": 2.73, "learning_rate": 2.4567339221911086e-06, "loss": 0.0918, "step": 298},
+ {"epoch": 2.74, "learning_rate": 2.424451730323261e-06, "loss": 0.1043, "step": 299},
+ {"epoch": 2.75, "learning_rate": 2.3923150223207176e-06, "loss": 0.1139, "step": 300},
+ {"epoch": 2.76, "learning_rate": 2.3603256134756066e-06, "loss": 0.0958, "step": 301},
+ {"epoch": 2.77, "learning_rate": 2.328485310759635e-06, "loss": 0.1115, "step": 302},
+ {"epoch": 2.78, "learning_rate": 2.296795912722014e-06, "loss": 0.092, "step": 303},
+ {"epoch": 2.79, "learning_rate": 2.265259209387867e-06, "loss": 0.1011, "step": 304},
+ {"epoch": 2.8, "learning_rate": 2.2338769821571225e-06, "loss": 0.106, "step": 305},
+ {"epoch": 2.81, "learning_rate": 2.202651003703885e-06, "loss": 0.1053, "step": 306},
+ {"epoch": 2.82, "learning_rate": 2.1715830378763025e-06, "loss": 0.1314, "step": 307},
+ {"epoch": 2.83, "learning_rate": 2.140674839596931e-06, "loss": 0.1252, "step": 308},
+ {"epoch": 2.83, "learning_rate": 2.109928154763606e-06, "loss": 0.1032, "step": 309},
+ {"epoch": 2.84, "learning_rate": 2.0793447201508288e-06, "loss": 0.0911, "step": 310},
+ {"epoch": 2.85, "learning_rate": 2.0489262633116536e-06, "loss": 0.0865, "step": 311},
+ {"epoch": 2.86, "learning_rate": 2.01867450248011e-06, "loss": 0.1087, "step": 312},
+ {"epoch": 2.87, "learning_rate": 1.9885911464741413e-06, "loss": 0.1136, "step": 313},
+ {"epoch": 2.88, "learning_rate": 1.9586778945990785e-06, "loss": 0.0873, "step": 314},
+ {"epoch": 2.89, "learning_rate": 1.928936436551661e-06, "loss": 0.1155, "step": 315},
+ {"epoch": 2.9, "learning_rate": 1.8993684523245842e-06, "loss": 0.117, "step": 316},
+ {"epoch": 2.91, "learning_rate": 1.8699756121115997e-06, "loss": 0.1083, "step": 317},
+ {"epoch": 2.92, "learning_rate": 1.8407595762131814e-06, "loss": 0.1099, "step": 318},
+ {"epoch": 2.93, "learning_rate": 1.811721994942731e-06, "loss": 0.1183, "step": 319},
+ {"epoch": 2.94, "learning_rate": 1.7828645085333645e-06, "loss": 0.1168, "step": 320},
+ {"epoch": 2.94, "learning_rate": 1.7541887470452606e-06, "loss": 0.0985, "step": 321},
+ {"epoch": 2.95, "learning_rate": 1.7256963302735752e-06, "loss": 0.1012, "step": 322},
+ {"epoch": 2.96, "learning_rate": 1.6973888676569594e-06, "loss": 0.1107, "step": 323},
+ {"epoch": 2.97, "learning_rate": 1.6692679581866334e-06, "loss": 0.1301, "step": 324},
+ {"epoch": 2.98, "learning_rate": 1.6413351903160763e-06, "loss": 0.1022, "step": 325},
+ {"epoch": 2.99, "learning_rate": 1.6135921418712959e-06, "loss": 0.1213, "step": 326},
+ {"epoch": 3.0, "learning_rate": 1.5860403799616951e-06, "loss": 0.1093, "step": 327},
+ {"epoch": 3.01, "learning_rate": 1.5586814608915673e-06, "loss": 0.0554, "step": 328},
+ {"epoch": 3.02, "learning_rate": 1.5315169300721694e-06, "loss": 0.084, "step": 329},
+ {"epoch": 3.03, "learning_rate": 1.5045483219344387e-06, "loss": 0.0741, "step": 330}
+ ],
+ "max_steps": 436,
+ "num_train_epochs": 4,
+ "total_flos": 1.2638375430617825e+18,
+ "trial_name": null,
+ "trial_params": null
+ }
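
The `log_history` above makes it easy to sanity-check the run: this checkpoint was saved at global step 330 of 436, a bit past three of the four configured epochs, with the loss dropping from ~0.65 to ~0.1. A minimal sketch for summarizing the curve from a local copy of the file:

```
import json

with open("airoboros-13b-gpt4/trainer_state.json") as f:
    state = json.load(f)

# Mean training loss per (integer) epoch from log_history.
by_epoch = {}
for entry in state["log_history"]:
    by_epoch.setdefault(int(entry["epoch"]), []).append(entry["loss"])
for epoch, losses in sorted(by_epoch.items()):
    print(f"epoch {epoch}: mean loss {sum(losses) / len(losses):.4f}")
```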
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:230e2de2b3c1ee9465cc7ecb9d3d138cecd7230b0261d72773fc5adf1461a099
+ size 4091
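
training_args.bin is a pickled `TrainingArguments` object, so it can be restored with `torch.load`. This is a sketch, not something the repo documents: unpickling needs a compatible `transformers` version installed, and on newer torch releases you may have to opt in to full pickle loading.

```
import torch

# Restore the HuggingFace TrainingArguments saved alongside the checkpoint.
args = torch.load("airoboros-13b-gpt4/training_args.bin")
print(args)
```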