---
license: mit
tags:
- LLM Agent
- vision-language navigation
- synthetic data
---
# LangNav: Language as a Perceptual Representation for Navigation

## About LangNav
LangNav is an LLM-based navigation agent that performs multi-step navigation end-to-end via textual descriptions of the scene. This language-based perceptual representation makes LangNav more data-efficient than vision-language (VL) models. Starting from only a few language-based trajectories from an R2R environment, we use GPT-4 to efficiently generate a large amount of synthetic training data. A smaller language model (LLaMA2-7B) can then be trained on this synthetic data to perform the task. In this repo, we provide the inference code, the model, and the training dataset we used for the paper:

**LangNav: Language as a Perceptual Representation for Navigation**

[Bowen Pan](https://people.csail.mit.edu/bpan/), [Rameswar Panda](https://rpand002.github.io/), [SouYoung Jin](https://souyoungjin.github.io/), [Rogerio Feris](https://www.rogerioferis.org/), [Aude Oliva](http://olivalab.mit.edu/), [Phillip Isola](https://web.mit.edu/phillipi/), and [Yoon Kim](https://people.csail.mit.edu/yoonkim/)

*NAACL 2024 (Findings)*

[[Paper](https://arxiv.org/pdf/2310.07889)][[GitHub](https://github.com/pbw-Berwin/LangNav)][[MIT News](https://news.mit.edu/2024/researchers-use-large-language-models-to-help-robots-navigate-0612)]

## Prerequisites

You do not need to install the [Matterport3D Simulator](https://github.com/peteanderson80/Matterport3DSimulator), as we have pre-extracted the captions of each viewpoint.

However, you still need to prepare the data in the following directories (see the setup sketch after this list):
- MP3D navigability graphs: `connectivity`
  - Download the [connectivity maps [23.8MB]](https://github.com/peteanderson80/Matterport3DSimulator/tree/master/connectivity).
- R2R data: `data`
  - Download the [R2R data [5.8MB]](https://github.com/peteanderson80/Matterport3DSimulator/tree/master/tasks/R2R/data).
- BLIP captions of the scene: `img_features`
  - Download the [caption data [113MB]](https://drive.google.com/file/d/1X7F48q--15h8cdzA_A5NozHtRJdtJsI8/view?usp=drive_link) (r2r_blip_DETR_vis2text).
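
Below is a minimal setup sketch, not an official LangNav script, under a few assumptions: the two GitHub-hosted directories are obtained by cloning the simulator repo, the clone path `/tmp/mp3d-sim` is arbitrary, and the Google Drive caption archive is downloaded manually (or with a tool such as `gdown`) into `img_features`.

```bash
# Sketch: assemble the expected layout
#   connectivity/   MP3D navigability graphs
#   data/           R2R task data
#   img_features/   BLIP captions (r2r_blip_DETR_vis2text)
git clone --depth 1 https://github.com/peteanderson80/Matterport3DSimulator.git /tmp/mp3d-sim
mkdir -p connectivity data img_features
cp -r /tmp/mp3d-sim/connectivity/* connectivity/
cp -r /tmp/mp3d-sim/tasks/R2R/data/* data/
# The caption archive from Google Drive goes into img_features/ by hand.
```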

Then install the Hugging Face [Transformers](https://github.com/huggingface/transformers) library.
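
For example, in a standard pip environment:

```bash
pip install transformers
```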

## Multi-step Navigation with Language-based Representation

Evaluate our [`LangNav-Sim2k-Llama2`](https://huggingface.co/bpan/LangNav-Sim2k-Llama2) model on the R2R dataset:
39
+ ```bash
40
+ sh eval_scripts/eval_langnav_2k_synthetic_100_real.sh
41
+ ```
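
Optionally, you can fetch the checkpoint ahead of time with the Hugging Face Hub CLI. This is a sketch rather than part of the released scripts; it assumes the evaluation script reads the checkpoint from the Hub, and the local directory name is our choice:

```bash
# huggingface-cli ships with the huggingface_hub package.
pip install -U huggingface_hub
huggingface-cli download bpan/LangNav-Sim2k-Llama2 --local-dir LangNav-Sim2k-Llama2
```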

We will also release the synthetic training dataset and additional models. Stay tuned!

## Citation
If you use or discuss LangNav, please cite our paper:
```
@article{pan2023langnav,
  title={LangNav: Language as a Perceptual Representation for Navigation},
  author={Pan, Bowen and Panda, Rameswar and Jin, SouYoung and Feris, Rogerio and Oliva, Aude and Isola, Phillip and Kim, Yoon},
  journal={arXiv preprint arXiv:2310.07889},
  year={2023}
}
```