Update README.md
README.md
CHANGED
@@ -2,11 +2,20 @@
 base_model: HuggingFaceTB/SmolLM-360M-Instruct
 ---

-
-
+### Coherent Frankenstein of SmolLM-0.36b upped to 0.4b

-
+This took about 5 hours of semi-automated continuous merging to figure out the recipe.
+The model is smarter than the base, and UNTRAINED. It is uploaded here for training, yet it performs well as-is, even quantized to 8-bit.
+An 8-bit GGUF is included for testing.

 ```bash
+wget https://huggingface.co/nisten/Biggie-SmoLlm-0.4B/resolve/main/Biggie_SmolLM_400M_q8_0.gguf
+```
+```bash
 ./llama-cli -ngl 99 -co --temp 0 -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?" -m Biggie_SmolLM_400M_q8_0.gguf -cnv -fa --keep -1
-```
+```
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/XgF2kz3Zz0Jqz7BEVZ96h.png)
+
+
+
+
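The quick-test one-liner in the README assumes you already have a llama.cpp checkout with `llama-cli` built. If you don't, a standard CMake build is enough; the sketch below assumes a CPU-only build (add `-DGGML_CUDA=ON` to the first cmake call if you want `-ngl 99` to actually offload layers to an NVIDIA GPU):

```bash
# Standard llama.cpp build; binaries end up in build/bin/
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j

# Then the README's command works from this directory, e.g.:
./build/bin/llama-cli -ngl 99 -co --temp 0 \
  -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?" \
  -m /path/to/Biggie_SmolLM_400M_q8_0.gguf -cnv -fa --keep -1
```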
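If you would rather hit the model over HTTP than through the interactive `-cnv` session, llama.cpp's `llama-server` exposes an OpenAI-compatible API; a minimal sketch, assuming the default host and port (127.0.0.1:8080):

```bash
# Serve the GGUF with the same offload/flash-attention flags as the CLI example
./build/bin/llama-server -m Biggie_SmolLM_400M_q8_0.gguf -ngl 99 -fa

# From another shell: the same Mars prompt through the chat completions endpoint
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "How to build a city on Mars via calculating Aldrin-Cycler orbits?"}],
        "temperature": 0
      }'
```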
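The card does not say which tooling produced the 0.36B-to-0.4B "Frankenstein", only that it took about 5 hours of semi-automated merging. Purely as an illustration of the general layer-duplication idea, a passthrough self-merge in mergekit looks roughly like this; the layer ranges are invented for the example and are not the published recipe:

```bash
pip install mergekit

# Hypothetical passthrough config: duplicate a slice of the base model's layers
# to grow the stack from 0.36B toward ~0.4B. Ranges below are illustrative only.
cat > frankenstein.yml <<'EOF'
slices:
  - sources:
      - model: HuggingFaceTB/SmolLM-360M-Instruct
        layer_range: [0, 24]
  - sources:
      - model: HuggingFaceTB/SmolLM-360M-Instruct
        layer_range: [16, 32]
merge_method: passthrough
dtype: bfloat16
EOF

mergekit-yaml frankenstein.yml ./biggie-merge-out
```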