Upload folder using huggingface_hub
- Meta-Llama-3-8B-Q2_K.gguf +2 -2
- Meta-Llama-3-8B-Q3_K_L.gguf +2 -2
- Meta-Llama-3-8B-Q3_K_M.gguf +2 -2
- Meta-Llama-3-8B-Q3_K_S.gguf +2 -2
- Meta-Llama-3-8B-Q4_0.gguf +2 -2
- Meta-Llama-3-8B-Q4_K_M.gguf +2 -2
- Meta-Llama-3-8B-Q4_K_S.gguf +2 -2
- Meta-Llama-3-8B-Q5_0.gguf +2 -2
- Meta-Llama-3-8B-Q5_K_M.gguf +2 -2
- Meta-Llama-3-8B-Q5_K_S.gguf +2 -2
- Meta-Llama-3-8B-Q6_K.gguf +2 -2
- Meta-Llama-3-8B-Q8_0.gguf +2 -2
- README.md +17 -20
Meta-Llama-3-8B-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f296d7867ecba4395032660e9b3777ae1a6ebf8c388f83e09b5528d59e22d5f2
+size 3179131104
Meta-Llama-3-8B-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:af5c7a9d23c4f9690ff09f45a1b8d66d85dec1c2b7bf1218c683edfc76d456fa
+size 4321956064
Meta-Llama-3-8B-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3db3b17fe5db4b761d4615ad219829f07cab1cf94962ed791ca21ffa5efa312c
+size 4018917600
Meta-Llama-3-8B-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8a95bcdc14ef3c68298c02e08c05f6e35ce00921a9a735d4a8dd13940f9647b8
+size 3664498912
Meta-Llama-3-8B-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6dfa2743042a82d00bf09dc7ab800531896a0724352009534cd55e07a6653ce7
+size 4661211360
Meta-Llama-3-8B-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:619b435df6c376f56a7b7527dd0290e153b9d0a891a928b231c074d71ad6bdd8
+size 4920733920
Meta-Llama-3-8B-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c785f18759c63f83e016f7474cc99dd27d29b0a78f45f08568148788884c710e
+size 4692668640
Meta-Llama-3-8B-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f6376e421da3b6a1836510810e2b198a126900af041e258fdb0ffd07cb600c6d
+size 5599293664
Meta-Llama-3-8B-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:815e00bc7a7f81ef5f879b0dea3c0dffbedae1342fd45d109bee8df4a39e080b
+size 5732987104
Meta-Llama-3-8B-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7e7c94e77df8ef4bfc3a4e17fb7d6d78e16551019d87920eb70606462f5c4fa1
+size 5599293664
Meta-Llama-3-8B-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6782f0b8e2d508010ee13c82d9dec9a371d148ee9f883d66ae2b12760d71ff3f
+size 6596006112
Meta-Llama-3-8B-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b9292538bf28a613dbec93b6db0628fbfebe5a21c3991c3851a02f2d988780c8
+size 8540770528
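Each of the entries above is a Git LFS pointer file: three lines giving the spec version, the SHA-256 of the actual blob, and its size in bytes. As a minimal illustrative sketch (not part of this repo — the helper names are made up here), a pointer can be parsed and a downloaded blob checked against it like this:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse the three-line Git LFS pointer format shown in the diffs above."""
    # Each line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),  # hex SHA-256 of the blob
        "size": int(fields["size"]),                   # blob size in bytes
    }

def blob_matches(pointer: dict, blob: bytes) -> bool:
    """A blob matches its pointer iff both the size and the SHA-256 agree."""
    return (len(blob) == pointer["size"]
            and hashlib.sha256(blob).hexdigest() == pointer["oid"])

# The new Q2_K pointer from the commit above:
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f296d7867ecba4395032660e9b3777ae1a6ebf8c388f83e09b5528d59e22d5f2\n"
    "size 3179131104\n"
)
```

This is why the diffs show only two changed lines per multi-gigabyte file: Git stores the small pointer, and the blob itself lives in LFS storage keyed by its hash.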
README.md CHANGED
@@ -10,9 +10,8 @@ tags:
 - llama-3
 - TensorBlock
 - GGUF
-license:
-
-license_link: LICENSE
+license: llama3
+new_version: meta-llama/Llama-3.1-8B
 extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
 \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
 \ use, reproduction, distribution and modification of the Llama Materials set forth\
@@ -177,7 +176,7 @@ extra_gated_fields:
 extra_gated_description: The information you provide will be collected, stored, processed
 and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
 extra_gated_button_content: Submit
-base_model: NousResearch/Meta-Llama-3-8B
+base_model: meta-llama/Meta-Llama-3-8B
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -191,13 +190,12 @@ base_model: NousResearch/Meta-Llama-3-8B
 </div>
 </div>
 
-##
+## meta-llama/Meta-Llama-3-8B - GGUF
 
-This repo contains GGUF format model files for [
+This repo contains GGUF format model files for [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
@@ -206,7 +204,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 ## Prompt template
 
-
 ```
 
 ```
@@ -215,18 +212,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Meta-Llama-3-8B-Q2_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q2_K.gguf) | Q2_K |
-| [Meta-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.
-| [Meta-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q3_K_M.gguf) | Q3_K_M |
-| [Meta-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.
-| [Meta-Llama-3-8B-Q4_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q4_0.gguf) | Q4_0 | 4.
-| [Meta-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.
-| [Meta-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.
-| [Meta-Llama-3-8B-Q5_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q5_0.gguf) | Q5_0 | 5.
-| [Meta-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.
-| [Meta-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.
-| [Meta-Llama-3-8B-Q6_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q6_K.gguf) | Q6_K | 6.
-| [Meta-Llama-3-8B-Q8_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q8_0.gguf) | Q8_0 |
+| [Meta-Llama-3-8B-Q2_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Meta-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
+| [Meta-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
+| [Meta-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
+| [Meta-Llama-3-8B-Q4_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Meta-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
+| [Meta-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
+| [Meta-Llama-3-8B-Q5_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Meta-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
+| [Meta-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
+| [Meta-Llama-3-8B-Q6_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
+| [Meta-Llama-3-8B-Q8_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
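The "Downloading instruction" section is cut off in this view. As a hedged illustration only (assuming the standard Hugging Face Hub URL layout, not quoting the README's own instructions), each quant file in the table resolves to a direct-download URL of this shape:

```python
# Illustrative sketch; `gguf_url` is a made-up helper, not part of the repo.
REPO_ID = "tensorblock/Meta-Llama-3-8B-GGUF"

def gguf_url(filename: str) -> str:
    """Direct-download URL for one quant file, per the standard Hub layout."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

# With the huggingface_hub package installed, the same file can be
# fetched into the local cache instead of built by hand:
#   from huggingface_hub import hf_hub_download
#   path = hf_hub_download(repo_id=REPO_ID,
#                          filename="Meta-Llama-3-8B-Q4_K_M.gguf")

print(gguf_url("Meta-Llama-3-8B-Q4_K_M.gguf"))
```

Note the `/resolve/main/` path downloads the LFS blob itself, whereas the `/blob/main/` links in the table above point at the file's Hub page.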