Commit da55c88 (committed by skaramcheti)
1 Parent(s): d1b7313

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -21,7 +21,7 @@ The model takes language instructions and camera images as input and generates r
 
  All OpenVLA checkpoints, as well as our [training codebase](https://github.com/openvla/openvla) are released under an MIT License.
 
- For full details, please read [our paper](https://openvla.github.io/) and see [our project page](https://openvla.github.io/).
+ For full details, please read [our paper](https://arxiv.org/abs/2406.09246) and see [our project page](https://openvla.github.io/).
 
  ## Model Summary
 
@@ -34,7 +34,7 @@ For full details, please read [our paper](https://openvla.github.io/) and see [o
  + **Language Model**: Vicuña v1.5
  - **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/) -- specific component datasets can be found [here](https://github.com/openvla/openvla).
  - **Repository:** [https://github.com/openvla/openvla](https://github.com/openvla/openvla)
- - **Paper:** [OpenVLA: An Open-Source Vision-Language-Action Model](https://openvla.github.io/)
+ - **Paper:** [OpenVLA: An Open-Source Vision-Language-Action Model](https://arxiv.org/abs/2406.09246)
  - **Project Page & Videos:** [https://openvla.github.io/](https://openvla.github.io/)
 
  ## Uses
@@ -99,7 +99,7 @@ For more examples, including scripts for fine-tuning OpenVLA models on your own
  @article{kim24openvla,
  title={OpenVLA: An Open-Source Vision-Language-Action Model},
  author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn},
- journal = {arXiv preprint},
+ journal = {arXiv preprint arXiv:2406.09246},
  year={2024}
  }
  ```
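
For orientation beyond the links touched by this commit, here is a minimal inference sketch in the spirit of the getting-started example in the linked training codebase: a camera frame and a language instruction go in, a robot action comes out. The repository ID, prompt wording, image source, and `unnorm_key` below are assumptions for illustration; the exact values for this checkpoint should be taken from the README itself.

```python
# Minimal inference sketch (assumptions: repo ID, prompt template, image source,
# and unnorm_key are illustrative -- check this model card for the exact values).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "openvla/openvla-7b"  # assumption: substitute the checkpoint this card describes

# Load the processor and VLA policy (OpenVLA ships its modeling code via trust_remote_code)
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda:0")

# One camera frame plus a language instruction form the model input
image = Image.open("frame.png")  # assumption: a third-person RGB camera frame
prompt = "In: What action should the robot take to pick up the remote?\nOut:"  # assumed prompt format

# predict_action returns an end-effector action vector, un-normalized with dataset statistics
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)
```

`do_sample=False` keeps decoding greedy, so the discretized action tokens are deterministic for a given frame and instruction.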