MaziyarPanahi
committed on
Commit b7942c8
1 Parent(s): 3b03f67
Upload folder using huggingface_hub (#1)
- 3c9a54bc88dd0a4aa88be71f719727642b3aca0286dd6493cbd5ff514bcafd12 (dc321feb1ca7562197eb93c32bb1077b59d6de7a)
- 1ab9f9f90c13930ab0e65465b3cbd7faf350b5c3dd01e13b22817fc1709e8e7a (1579a17d0ebe4fdff2451ef7ca2a4a63dc9bae9c)
- b928d44506a0853898647ebe3622a0e78de86ea534910907632931b2d0efb3cb (135abc4448b50189ea20fd53015da16e09b55fe4)
- d5a856f3ad076383d1b89b2d80298cd37722fc7949ad84cf1167d762ac53a849 (128b6828add6fec0ad0ce362d67677236f7473ec)
- .gitattributes +16 -0
- README.md +46 -0
- reader-lm-0.5b-GGUF_imatrix.dat +0 -0
- reader-lm-0.5b.IQ1_M.gguf +3 -0
- reader-lm-0.5b.IQ1_S.gguf +3 -0
- reader-lm-0.5b.IQ2_XS.gguf +3 -0
- reader-lm-0.5b.IQ3_XS.gguf +3 -0
- reader-lm-0.5b.IQ4_XS.gguf +3 -0
- reader-lm-0.5b.Q2_K.gguf +3 -0
- reader-lm-0.5b.Q3_K_L.gguf +3 -0
- reader-lm-0.5b.Q3_K_M.gguf +3 -0
- reader-lm-0.5b.Q3_K_S.gguf +3 -0
- reader-lm-0.5b.Q4_K_M.gguf +3 -0
- reader-lm-0.5b.Q4_K_S.gguf +3 -0
- reader-lm-0.5b.Q5_K_M.gguf +3 -0
- reader-lm-0.5b.Q5_K_S.gguf +3 -0
- reader-lm-0.5b.Q6_K.gguf +3 -0
- reader-lm-0.5b.Q8_0.gguf +3 -0
- reader-lm-0.5b.fp16.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,19 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+reader-lm-0.5b.fp16.gguf filter=lfs diff=lfs merge=lfs -text
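As a quick sanity check on the rules above, the `.gitattributes` globs can be approximated with Python's `fnmatch`. This is an approximation: git's gitattributes matcher has its own semantics, but it agrees with `fnmatch` for simple patterns like suffix globs and exact filenames.

```python
from fnmatch import fnmatch

# A subset of the patterns from the .gitattributes rules above:
# suffix globs, a substring glob, and one exact filename.
lfs_patterns = ["*.zip", "*.zst", "*tfevents*", "reader-lm-0.5b.Q4_K_M.gguf"]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if any LFS pattern matches the given filename."""
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(is_lfs_tracked("reader-lm-0.5b.Q4_K_M.gguf"))  # True
print(is_lfs_tracked("README.md"))                   # False
```

Files matching any of these patterns are stored as LFS pointers in git, which is why the model files below appear as three-line pointer stubs rather than binary content.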
README.md
ADDED
@@ -0,0 +1,45 @@
+---
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+model_name: reader-lm-0.5b-GGUF
+base_model: jinaai/reader-lm-0.5b
+inference: false
+model_creator: jinaai
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+---
+# [arcee-train/reader-lm-0.5b-GGUF](https://huggingface.co/arcee-train/reader-lm-0.5b-GGUF)
+- Model creator: [jinaai](https://huggingface.co/jinaai)
+- Original model: [jinaai/reader-lm-0.5b](https://huggingface.co/jinaai/reader-lm-0.5b)
+
+## Description
+[arcee-train/reader-lm-0.5b-GGUF](https://huggingface.co/arcee-train/reader-lm-0.5b-GGUF) contains GGUF format model files for [jinaai/reader-lm-0.5b](https://huggingface.co/jinaai/reader-lm-0.5b).
+
+### About GGUF
+
+GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of this writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
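Every GGUF file opens with a small fixed-size header: the magic bytes `GGUF`, a format version, then tensor and metadata key/value counts. A minimal sketch of parsing that header, run here against a synthetic byte string rather than one of the real model files in this commit:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header: 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata KV count (little-endian)."""
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", data[:24])
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Synthetic 24-byte header (version 3, 2 tensors, 5 metadata entries),
# purely for illustration.
header = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(header))  # {'version': 3, 'n_tensors': 2, 'n_kv': 5}
```

In practice you would read the first 24 bytes of one of the `.gguf` files below and pass them to `read_gguf_header`; the metadata KV section that follows the header carries the architecture, tokenizer, and quantization details.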
reader-lm-0.5b-GGUF_imatrix.dat
ADDED
Binary file (989 kB)
reader-lm-0.5b.IQ1_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb02eb9979d564bcce8053d3e36236b0c2f422e0bd3d191013c6eac5f106bd67
+size 317971968
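Each of the model files in this commit is stored as a Git LFS pointer like the one above: a `version` line, a `sha256` object ID, and the blob size in bytes. A minimal sketch of parsing such a pointer and verifying a downloaded blob against it, using a tiny synthetic blob rather than the real 318 MB file:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_blob(fields: dict, blob: bytes) -> bool:
    """Check a blob against the pointer's declared size and sha256 oid."""
    if len(blob) != int(fields["size"]):
        return False
    algo, _, digest = fields["oid"].partition(":")
    return algo == "sha256" and hashlib.sha256(blob).hexdigest() == digest

# Illustrative pointer for a synthetic blob (not one of the real GGUF files).
blob = b"hello gguf"
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
fields = parse_lfs_pointer(pointer)
print(verify_blob(fields, blob))  # True
```

The same check applied to the pointers below is how `git lfs` and the Hub detect corrupted or truncated downloads.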
reader-lm-0.5b.IQ1_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32f59c548a2ed80501e815d8d3cc1ea7ef1ce1d78762f9eceb8a1a94948ad889
+size 315826944
reader-lm-0.5b.IQ2_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29f38f0b4c793649dfbfd3431f0105bc83580cac013dcfbdac451093f4e30317
+size 324407040
reader-lm-0.5b.IQ3_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25b704bd826307cc030f167a40f7656a829740e805c5bbf2804beaea04b98775
+size 338605056
reader-lm-0.5b.IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22aeae3fc8845655578490e08014d1d9c24ccdbcefeb743d67c852e71130fe76
+size 349400064
reader-lm-0.5b.Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a5042ffd7638699b60fe598e2e4c7bf772598bba744e5d502953afef8938cb9
+size 338605056
reader-lm-0.5b.Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f257eacc620d3915d7413d9616a3c8cfa4556319e101757bd19de2a4f67ec05f
+size 369355776
reader-lm-0.5b.Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4b905bc7520bfed8fbbbfc3d8c0470bde5753ebe274f04bf8db7aa1d0073855
+size 355464192
reader-lm-0.5b.Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fb970c6203b4a95e914bd927c7fdfafd889679b1b93374139c03fb0c94fd36c
+size 338260992
reader-lm-0.5b.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:349112f099f168abb94452e0f9f94a04f49c47713aa9605b9c752c0e82ca200b
+size 397805568
reader-lm-0.5b.Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4524ab76a41ac1f4c2e6d81c1616f7a5973e96c80703382eebb661a75cd515bf
+size 385469440
reader-lm-0.5b.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1509c1cc1ceb41c1587ca07ef7acda775cadbeeae98e82df4f10ad5c197baedb
+size 420083712
reader-lm-0.5b.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0acbedec4c73a12a74a958079b7ef4462b0ba44096e4a10d5603370b23f0720d
+size 412707840
reader-lm-0.5b.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ae1925c5cb16129e0147a0cfa8152a717dfda223ea940aca1dfc0b681a2b81e
+size 505734144
reader-lm-0.5b.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f788be5b00b2ba2617c551873aaa368e5b4aff823d882d3428f364a1a28d775c
+size 531065856
reader-lm-0.5b.fp16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e9eccdbb69de148719299f37f1585cbb1536453a3b6562c4d544c3631800e48
+size 994154272
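The pointer sizes in this commit also allow a rough bits-per-weight comparison across the quantizations. The parameter count below is an assumption (roughly 494M for a 0.5B-class model; the true figure is not stated anywhere in this commit), so treat the results as ballpark estimates only:

```python
# Rough bits-per-weight for a few of the quantized files above.
# PARAMS is an assumed parameter count, not taken from this commit.
PARAMS = 494_000_000

# File sizes in bytes, copied from the LFS pointers above.
sizes = {
    "Q2_K": 338_605_056,
    "Q4_K_M": 397_805_568,
    "Q8_0": 531_065_856,
    "fp16": 994_154_272,
}

for name, size_bytes in sizes.items():
    bpw = size_bytes * 8 / PARAMS
    print(f"{name}: {bpw:.2f} bits/weight")
```

Under this assumption the fp16 file works out to about 16 bits/weight, as expected; the quantized files land above their nominal bit widths because embeddings and some tensors are kept at higher precision.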