---
library_name: transformers
base_model:
- meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- diffusion
model-index:
- name: diffullama
  results: []
license: apache-2.0
datasets:
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/diffullama-GGUF
This is a quantized version of [diffusionfamily/diffullama](https://huggingface.co/diffusionfamily/diffullama), created using llama.cpp.
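
As a quick usage sketch, a GGUF file from this repository can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The filename below is a placeholder; check the repository's file list for the actual quantization level. Note that diffullama is a diffusion language model, so plain autoregressive sampling through llama.cpp may behave differently from the diffusion decoding described in the DiffuLLaMA repository.

```python
# Minimal sketch: load a GGUF quant of this model with llama-cpp-python.
# NOTE: the filename is a placeholder; pick an actual .gguf file from this
# repository (quantization levels vary, e.g. Q4_K_M, Q8_0).
from llama_cpp import Llama

llm = Llama(
    model_path="diffullama.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,                           # context window
)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```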

# Original Model Card

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# diffullama

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).

## Model description

Details and model-loading instructions are available at [https://github.com/HKUNLP/DiffuLLaMA](https://github.com/HKUNLP/DiffuLLaMA).
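
The DiffuLLaMA repository linked above is the authoritative reference for loading. As a rough sketch, the full-precision checkpoint is a `transformers` model, so loading will look roughly like the following; whether `trust_remote_code` or a custom model class is needed is an assumption here, not something this card confirms.

```python
# Rough sketch of loading the original (non-GGUF) checkpoint with transformers.
# See https://github.com/HKUNLP/DiffuLLaMA for the authoritative loading code.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("diffusionfamily/diffullama")
model = AutoModel.from_pretrained(
    "diffusionfamily/diffullama",
    trust_remote_code=True,  # assumption: custom diffusion code may be required
)
```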

### Framework versions

- Transformers 4.44.2
- PyTorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

```bibtex
@misc{gong2024scalingdiffusionlanguagemodels,
      title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
      author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
      year={2024},
      eprint={2410.17891},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.17891},
}
```