mradermacher committed e096570 (1 parent: 724ab5c)

auto-patch README.md

Files changed (1): README.md (+102 -0)

---
base_model: internlm/internlm2-20b
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
exported_from: jondurbin/bagel-20b-v04
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/internlm/internlm2-20b#open-source-license
license_name: internlm2-20b
quantized_by: mradermacher
---
## About

<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jondurbin/bagel-20b-v04

<!-- provided-files -->
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.

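As a quick illustration (not an official part of this card), loading one of these quants from Python with the third-party llama-cpp-python bindings and huggingface_hub might look like the sketch below; the repo id and file name come from the table in the next section, while the quant choice, context size, and prompt are assumptions:

```python
# Minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`.
# Q4_K_S ("fast, recommended" per the table below) is an illustrative choice.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/bagel-20b-v04-GGUF",
    filename="bagel-20b-v04.Q4_K_S.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)  # context size is illustrative
out = llm("Write a haiku about bagels.", max_tokens=64)
print(out["choices"][0]["text"])
```
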
## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q2_K.gguf) | Q2_K | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_XS.gguf) | IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_S.gguf) | Q3_K_S | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_M.gguf) | IQ3_M | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_M.gguf) | Q3_K_M | 10.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_L.gguf) | Q3_K_L | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ4_XS.gguf) | IQ4_XS | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q4_K_S.gguf) | Q4_K_S | 12.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q4_K_M.gguf) | Q4_K_M | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q5_K_S.gguf) | Q5_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q5_K_M.gguf) | Q5_K_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q6_K.gguf) | Q6_K | 17.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q8_0.gguf) | Q8_0 | 21.7 | fast, best quality |

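Because the table is sorted by size, file size is the usual first filter when choosing a quant. The sketch below is purely illustrative: the sizes are copied from the table above, while the helper itself and the 16 GB budget are assumptions, not anything shipped with this repo:

```python
# Hypothetical helper: pick the largest quant from the table above that
# fits a memory budget. Sizes in GB are copied verbatim from the table.
SIZES_GB = {
    "Q2_K": 8.3, "IQ3_XS": 9.1, "Q3_K_S": 9.5, "IQ3_S": 9.6, "IQ3_M": 9.9,
    "Q3_K_M": 10.5, "Q3_K_L": 11.3, "IQ4_XS": 11.6, "Q4_K_S": 12.2,
    "Q4_K_M": 12.8, "Q5_K_S": 14.5, "Q5_K_M": 14.8, "Q6_K": 17.1, "Q8_0": 21.7,
}

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant type whose file fits within budget_gb."""
    fitting = {t: s for t, s in SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        raise ValueError("no quant fits the given budget")
    return max(fitting, key=fitting.get)

# With ~16 GB to spare this picks Q5_K_M (14.8 GB); per the naming scheme
# in the table, the corresponding file is bagel-20b-v04.Q5_K_M.gguf.
print(pick_quant(16.0))
```

Size is only a first cut; the Notes column (e.g. IQ3_S beating Q3_K*, Q3_K_M being lower quality) should break ties between similar sizes.
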
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.

<!-- end -->