ybelkada committed
Commit 856badf
1 parent: 6ddf4e2

Update README.md

Files changed (1):
README.md +9 -5
README.md CHANGED
@@ -379,12 +379,16 @@ Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatr
 
 # Citation
 
-*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
+You can use the following bibtex citation:
 ```
-@article{falconmamba,
-title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
-author={Zuo, Jingwei and Velikanov, Maksim and Rhaiem, Dhia Eddine and Chahed, Ilyas and Belkada, Younes and Kunsch, Guillaume and Hacid, Hakim},
-year={2024}
+@misc{zuo2024falconmambacompetitiveattentionfree,
+title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
+author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid},
+year={2024},
+eprint={2410.05355},
+archivePrefix={arXiv},
+primaryClass={cs.CL},
+url={https://arxiv.org/abs/2410.05355},
 }
 ```
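For reference, a minimal sketch of how the new entry can be used from a LaTeX document. The file names `paper.tex` and `references.bib` are assumptions for illustration, not part of the commit: save the `@misc` entry above in `references.bib`, then cite it by its key.

```latex
% Minimal sketch, assuming the @misc entry from the diff above is saved in references.bib.
% Typical build: pdflatex paper.tex && bibtex paper && pdflatex paper.tex (run twice).
\documentclass{article}

\begin{document}
Falcon Mamba~\cite{zuo2024falconmambacompetitiveattentionfree} is an
attention-free 7B language model.

\bibliographystyle{plain}
\bibliography{references} % reads references.bib
\end{document}
```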