CaraJ committed e073f08 (1 parent: 3df24ef)

Update README.md

README.md CHANGED (+78 -1)
dataset_info:
  splits:
  - name: end2end
    num_examples: 300
---
# MMSearch πŸ”₯: Benchmarking the Potential of Large Models as Multi-modal Search Engines

Official repository for the paper "[MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines]()".

🌟 For more details, please refer to the project page with dataset exploration and visualization tools: [https://mmsearch.github.io/](https://mmsearch.github.io/).

[[🌐 Webpage](https://mmsearch.github.io/)] [[πŸ“– Paper]()] [[πŸ€— Huggingface Dataset](https://huggingface.co/datasets/CaraJ/MMSearch)] [[πŸ† Leaderboard](https://mmsearch.github.io/#leaderboard)] [[πŸ” Visualization](https://huggingface.co/datasets/CaraJ/MMSearch/viewer)]

## πŸ’₯ News

- **[2024.09.20]** πŸš€ We release the [arXiv paper]() and some data samples in the [visualizer](https://huggingface.co/datasets/CaraJ/MMSearch/viewer).

## πŸ“Œ ToDo

- Coming soon: *Evaluation code*

## πŸ‘€ About MMSearch

The capabilities of **Large Multi-modal Models (LMMs)** in **multimodal search** remain insufficiently explored and evaluated. To fill this gap, we first design a pipeline, **MMSearch-Engine**, that empowers **any LMM** to function as a multimodal AI search engine.

<p align="center">
<img src="https://github.com/CaraJ7/MMSearch/raw/main/figs/fig1.png" width="75%"> <br>
The overview of <b>MMSearch-Engine</b>.
</p>
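The end-to-end loop of such an engine can be sketched as below. The function interfaces (`lmm`, `web_search`, `fetch_page`) are hypothetical placeholders for illustration, not the repository's actual API:

```python
def multimodal_search(lmm, web_search, fetch_page, question, image=None, top_k=5):
    """Sketch of a requery -> search -> rerank -> summarize loop.

    `lmm`, `web_search`, and `fetch_page` are caller-supplied callables;
    these interfaces are illustrative assumptions, not MMSearch's real code.
    """
    # 1) Requery: rewrite the (image, text) query into a text search query.
    requery = lmm("requery", question, image)
    # 2) Retrieve candidate websites for the rewritten query.
    candidates = web_search(requery)[:top_k]
    # 3) Rerank: let the LMM pick the most relevant candidate.
    best = lmm("rerank", question, candidates)
    # 4) Summarize: answer the question from the chosen page's content.
    page = fetch_page(best)
    return lmm("summarize", question, page)
```

Any LMM can be plugged in by wrapping it behind the `lmm` callable, which is what lets the pipeline serve arbitrary models.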

To further evaluate the potential of LMMs in the multimodal search domain, we introduce **MMSearch**, an all-around benchmark designed for assessing multimodal search performance. The benchmark contains 300 manually collected instances spanning 14 subfields, with no overlap with current LMMs' training data, ensuring that the correct answer can only be obtained through search.

<p align="center">
<img src="https://raw.githubusercontent.com/CaraJ7/MMSearch/main/figs/fig2.png" width="60%"> <br>
The overview of <b>MMSearch</b>.
</p>
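One plausible way to access the benchmark data is via the Hugging Face `datasets` library; the repo id and split name below are taken from the dataset card above, but this snippet is a sketch rather than official loading code:

```python
def load_mmsearch(split: str = "end2end"):
    """Load an MMSearch split (the `end2end` split has 300 examples).

    Requires `pip install datasets`; the data is downloaded on first call.
    The lazy import keeps the module usable without the dependency.
    """
    from datasets import load_dataset
    return load_dataset("CaraJ/MMSearch", split=split)
```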

In addition, we propose a **step-wise evaluation strategy** to better understand LMMs' searching capability. The models are evaluated on **three individual tasks (requery, rerank, and summarization)** and **one challenging end-to-end task** covering the complete searching process. The final score is a weighted combination of the four task scores.

<p align="center">
<img src="https://raw.githubusercontent.com/CaraJ7/MMSearch/main/figs/fig3.png" width="90%"> <br>
Outline of Evaluation Tasks, Inputs, and Outputs.
</p>

An example of LMM input, output, and ground truth for the four evaluation tasks is shown [here](figs/fig4.png).
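As an illustration of the weighting described above, the final score might be computed as a weighted average of the four task scores; the weights below are placeholders, not the paper's actual values:

```python
def final_score(requery, rerank, summarization, end2end,
                weights=(0.1, 0.2, 0.2, 0.5)):
    """Weighted combination of the four MMSearch task scores (each in [0, 1]).

    The weights here are illustrative placeholders; consult the paper
    for the actual weighting scheme.
    """
    scores = (requery, rerank, summarization, end2end)
    # Weights must cover all four tasks and sum to 1.
    assert len(weights) == len(scores) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, scores))
```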

## πŸ† Leaderboard

### Contributing to the Leaderboard

🚨 The [Leaderboard](https://mmsearch.github.io/#leaderboard) is continuously being updated, and we welcome contributions from your excellent LMMs!

## :white_check_mark: Citation

If you find **MMSearch** useful for your research and applications, please kindly cite using this BibTeX:

```latex
@article{jiang2024mmsearch,
  title={MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines},
  author={Dongzhi Jiang and Renrui Zhang and Ziyu Guo and Yanmin Wu and Jiayi Lei and Pengshuo Qiu and Pan Lu and Zehui Chen and Guanglu Song and Peng Gao and Yu Liu and Chunyuan Li and Hongsheng Li},
  journal={arXiv},
  year={2024}
}
```

## 🧠 Related Work

Explore our additional research on **Vision-Language Large Models**:

- **[MathVerse]** [MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?](https://mathverse-cuhk.github.io/)
- **[MathVista]** [MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts](https://github.com/lupantech/MathVista)
- **[LLaMA-Adapter]** [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention](https://github.com/OpenGVLab/LLaMA-Adapter)
- **[LLaMA-Adapter V2]** [LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model](https://github.com/OpenGVLab/LLaMA-Adapter)
- **[ImageBind-LLM]** [ImageBind-LLM: Multi-modality Instruction Tuning](https://github.com/OpenGVLab/LLaMA-Adapter/tree/main/imagebind_LLM)
- **[SPHINX]** [The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal LLMs](https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX)
- **[SPHINX-X]** [Scaling Data and Parameters for a Family of Multi-modal Large Language Models](https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX)
- **[Point-Bind & Point-LLM]** [Multi-modality 3D Understanding, Generation, and Instruction Following](https://github.com/ZiyuGuo99/Point-Bind_Point-LLM)
- **[PerSAM]** [Personalize Segment Anything Model with One Shot](https://github.com/ZrrSkywalker/Personalize-SAM)
- **[CoMat]** [CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching](https://caraj7.github.io/comat/)