Custom GGUF quants of Google’s [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it), where the Output Tensors are quantized to Q8_0 or kept at F32 while the Embeddings are kept at F32. 🧠🔥🚀
Notes: Great SMOL LLM for on-device inference on mobile devices. 😋
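A quantization scheme like the one described above can be sketched with llama.cpp's `llama-quantize` tool, which accepts per-tensor type overrides. This is an illustrative recipe, not the exact commands used for this repo; the base quant (`Q4_K_M` here), file paths, and script locations are assumptions.

```shell
# 1. Convert the Hugging Face checkpoint to an F32 GGUF
#    (conversion script ships with the llama.cpp repo):
python convert_hf_to_gguf.py ./gemma-2-2b-it \
  --outfile gemma-2-2b-it-F32.gguf --outtype f32

# 2. Quantize the model, pinning the output tensor to Q8_0 and
#    keeping the token embeddings at F32 (base quant is illustrative):
./llama-quantize \
  --output-tensor-type Q8_0 \
  --token-embedding-type F32 \
  gemma-2-2b-it-F32.gguf gemma-2-2b-it-Q4_K_M.gguf Q4_K_M
```

For the variants that keep the output tensor at F32, swap `Q8_0` for `F32` in the `--output-tensor-type` flag.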