---
library_name: transformers
base_model:
- meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- diffusion
model-index:
- name: diffullama
  results: []
license: apache-2.0
datasets:
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/diffullama-GGUF

This is a quantized (GGUF) version of [diffusionfamily/diffullama](https://huggingface.co/diffusionfamily/diffullama), created using llama.cpp. A loading sketch appears at the end of this card.

# Original Model Card

# diffullama

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).

## Model description

Details and model-loading instructions can be found at [https://github.com/HKUNLP/DiffuLLaMA](https://github.com/HKUNLP/DiffuLLaMA).

### Framework versions

- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

## Citation

```
@misc{gong2024scalingdiffusionlanguagemodels,
      title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
      author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
      year={2024},
      eprint={2410.17891},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.17891},
}
```
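
## Usage sketch

The following is a minimal sketch, not an official recipe, for loading a GGUF quantization from this repository with the `llama-cpp-python` bindings. The filename `diffullama.Q4_K_M.gguf` is a hypothetical placeholder; substitute the quantization file you actually download from this repository's file list. Note also that diffullama is a diffusion language model, so llama.cpp's default autoregressive sampling may not reproduce the decoding procedure described in the DiffuLLaMA repository.

```python
# Minimal sketch (assumptions noted): load a GGUF quantization of diffullama
# with the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="./diffullama.Q4_K_M.gguf",  # hypothetical filename; use your downloaded file
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)
```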
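For the original full-precision checkpoint, a minimal sketch assuming it loads through `transformers` with custom modeling code enabled; see the DiffuLLaMA GitHub repository linked above for the authoritative loading and decoding instructions:

```python
# Minimal sketch (assumption: custom modeling code is required, per the
# DiffuLLaMA GitHub repository) for loading the unquantized checkpoint.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("diffusionfamily/diffullama")
model = AutoModel.from_pretrained(
    "diffusionfamily/diffullama",
    trust_remote_code=True,  # assumed necessary for the diffusion adaptation code
)
```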