Brandon B. May
committed on
Commit 8693c44
1 Parent(s): bb343f6
Update README.md
README.md CHANGED
@@ -17,6 +17,17 @@ models and prior robot learning models using less training data and smaller mode
 
 The `theia-tiny-patch16-224-cddsv` model uses [DeiT-Tiny](https://huggingface.co/facebook/deit-tiny-patch16-224) as a backbone and simultaneously distills [CLIP](https://github.com/openai/CLIP), [Depth Anything](https://github.com/LiheYoung/Depth-Anything), [DINOv2](https://github.com/facebookresearch/dinov2), [Segment Anything](https://github.com/facebookresearch/segment-anything) and [ViT](https://github.com/google-research/vision_transformer). For more information on usage, please visit the [Theia repository](https://github.com/bdaiinstitute/theia/tree/main).
 
+## Citation
+If you use Theia in your research, please use the following BibTeX entry:
+```bibtex
+@article{shang2024theia,
+    author  = {Shang, Jinghuan and Schmeckpeper, Karl and May, Brandon B. and Minniti, Maria Vittoria and Kelestemur, Tarik and Watkins, David and Herlant, Laura},
+    title   = {Theia: Distilling Diverse Vision Foundation Models for Robot Learning},
+    journal = {arXiv},
+    year    = {2024},
+}
+```
+
 ## Usage
 
 The pre-trained model weights and code released with Theia are available for use under [The AI Institute License](https://raw.githubusercontent.com/bdaiinstitute/theia/main/LICENSE), reproduced in full below:
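
The README above defers usage details to the Theia repository, so as a quick orientation here is a minimal sketch of loading the released checkpoint with Hugging Face `transformers`. The `theaiinstitute` org name and the need for `trust_remote_code=True` are assumptions, not part of this commit; the documented feature-extraction interface lives in the Theia repository linked in the README.

```python
# Minimal sketch (not part of this commit): load the released
# theia-tiny-patch16-224-cddsv weights from the Hugging Face Hub.
# The "theaiinstitute" org name is an assumption; consult the Theia
# repository for the documented feature-extraction API.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "theaiinstitute/theia-tiny-patch16-224-cddsv",  # assumed Hub id for this checkpoint
    trust_remote_code=True,  # the model class ships as remote code alongside the weights
)
model.eval()  # inference only; the forward/feature call is defined by the Theia code
```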