|
--- |
|
title: Video Depth using LOTUS Depth! |
|
emoji: 🚀 |
|
colorFrom: blue |
|
colorTo: indigo |
|
sdk: gradio |
|
sdk_version: 5.3.0 |
|
app_file: app.py |
|
pinned: false |
|
license: mit |
|
--- |
|
|
|
## Acknowledgments |
|
|
|
This application uses the depth estimation model introduced in the following paper:
|
|
|
**Citation:** |
|
|
|
He, Jing et al. *"Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction."* arXiv preprint arXiv:2409.18124 (2024). |
|
|
|
**BibTeX:** |
|
|
|
```bibtex |
|
@article{he2024lotus, |
|
title={Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction}, |
|
author={He, Jing and Li, Haodong and Yin, Wei and Liang, Yixun and Li, Leheng and Zhou, Kaiqiang and Liu, Hongbo and Liu, Bingbing and Chen, Ying-Cong}, |
|
journal={arXiv preprint arXiv:2409.18124}, |
|
year={2024} |
|
}
```
|
|
|
You can find the original code and model on [GitHub](https://github.com/EnVision-Research/Lotus). |
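For orientation, here is a minimal sketch of what a Gradio `app.py` entry point for a video-depth Space like this one could look like. It is not the actual implementation of this Space: `estimate_depth` is a hypothetical placeholder standing in for the Lotus model call, and the sketch assumes `imageio` (with `imageio-ffmpeg`) is available for video I/O.

```python
# Hypothetical sketch of an app.py entry point for a Gradio video-depth Space.
# `estimate_depth` is a placeholder for the Lotus depth model call; see the
# Lotus repository linked above for the actual inference code.
import gradio as gr
import imageio  # assumes imageio + imageio-ffmpeg are installed for video I/O
import numpy as np


def estimate_depth(frame: np.ndarray) -> np.ndarray:
    """Placeholder: run the Lotus depth model on one RGB frame, return a depth map."""
    raise NotImplementedError("Replace with Lotus model inference.")


def video_to_depth(video_path: str) -> str:
    """Read a video, predict depth frame by frame, and write a depth video."""
    reader = imageio.get_reader(video_path)
    fps = reader.get_meta_data().get("fps", 24)
    depth_frames = [estimate_depth(frame) for frame in reader]
    out_path = "depth_output.mp4"
    imageio.mimsave(out_path, depth_frames, fps=fps)
    return out_path


demo = gr.Interface(
    fn=video_to_depth,
    inputs=gr.Video(label="Input video"),
    outputs=gr.Video(label="Estimated depth"),
    title="Video Depth using LOTUS Depth",
)

if __name__ == "__main__":
    demo.launch()
```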
|
|
|
--- |
|
|
|
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference |