Update README.md
README.md
CHANGED
@@ -53,13 +53,13 @@ datasets:
 quantized_by: suparious
 pipeline_tag: text-generation
 ---
-## Exllama v2 Quantizations of Einstein-v5-v0.2-7B
+## Exllama v2 Quantizations of Weyaxi/Einstein-v5-v0.2-7B
 
 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization.
 
 Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
 
-Original model:
+Original model: Weyaxi/Einstein-v5-v0.2-7B
 
 Model Size: 7b
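Because each bits-per-weight variant lives on its own branch, a consumer typically pulls a single revision rather than the whole repo. As a minimal sketch (not part of this commit), assuming the quantized repo is published as suparious/Einstein-v5-v0.2-7B-exl2 with a hypothetical 6_5 branch, one variant could be fetched with huggingface_hub:

```python
from huggingface_hub import snapshot_download

# Hypothetical example: the repo_id and the "6_5" branch name are assumptions,
# not taken from this README. Each branch holds one bits-per-weight variant;
# "main" holds only measurement.json for further conversions.
snapshot_download(
    repo_id="suparious/Einstein-v5-v0.2-7B-exl2",   # assumed repo id
    revision="6_5",                                 # assumed bpw branch
    local_dir="Einstein-v5-v0.2-7B-exl2-6_5",
)
```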