kinakomochi committed
Commit a2339dc
1 Parent(s): 8a35408

Update README.md

Files changed (1)
  1. README.md +12 -16
README.md CHANGED
@@ -4,55 +4,51 @@ tags:
  - merge
  - mergekit
  - lazymergekit
- - /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997
  ---

  # Evolved-Llama3-8B

  Evolved-Llama3-8B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
- * [/content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997](https://huggingface.co//content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997)
- * [/content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997](https://huggingface.co//content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997)
- * [/content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997](https://huggingface.co//content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997)
- * [/content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997](https://huggingface.co//content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997)
-
+ * elyza/Llama-3-ELYZA-JP-8B
+ * nvidia/Llama3-ChatQA-1.5-8B
  ## 🧩 Configuration

- ```yaml
+ ```yaml
  slices:
  - sources:
    - layer_range: [0, 8]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997
+     model: Llama-3-ELYZA-JP-8B_2371007997
      parameters:
        weight: 0.2924041594566723
    - layer_range: [0, 8]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama3-ChatQA-1.5-8B_376305873
+     model: Llama3-ChatQA-1.5-8B_376305873
      parameters:
        weight: 1.0002597402802504
  - sources:
    - layer_range: [8, 16]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997
+     model: Llama-3-ELYZA-JP-8B_2371007997
      parameters:
        weight: 0.5303090111436538
    - layer_range: [8, 16]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama3-ChatQA-1.5-8B_376305873
+     model: Llama3-ChatQA-1.5-8B_376305873
      parameters:
        weight: 0.6266010695928661
  - sources:
    - layer_range: [16, 24]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997
+     model: Llama-3-ELYZA-JP-8B_2371007997
      parameters:
        weight: 0.3491957124910876
    - layer_range: [16, 24]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama3-ChatQA-1.5-8B_376305873
+     model: Llama3-ChatQA-1.5-8B_376305873
      parameters:
        weight: 0.44349113433925463
  - sources:
    - layer_range: [24, 32]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama-3-ELYZA-JP-8B_2371007997
+     model: Llama-3-ELYZA-JP-8B_2371007997
      parameters:
        weight: 0.38380980665908515
    - layer_range: [24, 32]
-     model: /content/drive/MyDrive/Merge/evol_merge_storage/input_models/Llama3-ChatQA-1.5-8B_376305873
+     model: Llama3-ChatQA-1.5-8B_376305873
      parameters:
        weight: 0.5068229626895051
- ```
+ ```
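The updated configuration reads as a slice-wise merge: each `sources` block covers an 8-layer range and assigns each parent model its own weight. The diff does not show a `merge_method`, so the sketch below assumes a normalized linear combination (mergekit's `linear` method normalizes weights by default) purely for illustration; `merge_slice` is a hypothetical helper, not part of the mergekit API.

```python
# Illustrative sketch only: how a normalized linear merge would combine two
# parent models' tensors for one layer slice. It assumes the config's
# `weight` values act as linear coefficients; this is not mergekit's actual
# implementation, and merge_slice is a made-up helper.
import torch


def merge_slice(params_a: dict[str, torch.Tensor],
                params_b: dict[str, torch.Tensor],
                weight_a: float,
                weight_b: float) -> dict[str, torch.Tensor]:
    """Return (wa * A + wb * B) / (wa + wb) for every tensor in the slice."""
    total = weight_a + weight_b
    return {
        name: (weight_a * params_a[name] + weight_b * params_b[name]) / total
        for name in params_a
    }


# Layers 0-7 would use the first slice's weights from the config above:
#   merge_slice(elyza_layers, chatqa_layers,
#               0.2924041594566723, 1.0002597402802504)
```

Read this way, the first slice leans heavily toward Llama3-ChatQA-1.5-8B (roughly 0.77 of the normalized mass for layers 0-7), while the three later slices sit nearer an even blend (about 0.54 to 0.57).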
 
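For completeness, a minimal usage sketch with 🤗 Transformers. The repository id `kinakomochi/Evolved-Llama3-8B` is inferred from the commit author and the model name, not stated in the diff; substitute the actual id if it differs.

```python
# Hedged usage sketch: load the merged model and generate a reply.
# "kinakomochi/Evolved-Llama3-8B" is an assumed repo id (see note above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kinakomochi/Evolved-Llama3-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # both parents are Llama-3-8B derivatives
    device_map="auto",
)

# One parent (ELYZA) is Japanese-tuned, so a Japanese prompt is a fair test.
prompt = "こんにちは。日本の首都はどこですか？"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=64, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```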