---
task_categories:
  - question-answering
  - visual-question-answering
language:
  - en
tags:
  - Multimodal Search
size_categories:
  - n<1K
configs:
  - config_name: end2end
    data_files:
      - split: end2end
        path: end2end.parquet
dataset_info:
  - config_name: end2end
    features:
      - name: sample_id
        dtype: string
      - name: query
        dtype: string
      - name: query_image
        dtype: image
      - name: image_search_result
        dtype: image
      - name: area
        dtype: string
      - name: subfield
        dtype: string
      - name: timestamp
        dtype: string
      - name: gt_requery
        dtype: string
      - name: gt_answer
        dtype: string
    splits:
      - name: end2end
        num_examples: 300
---

MMSearch πŸ”₯: Benchmarking the Potential of Large Models as Multi-modal Search Engines

Official repository for the paper "MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines".

🌟 For more details, please refer to the project page with dataset exploration and visualization tools: https://mmsearch.github.io/.

[🌐 Webpage] [πŸ“– Paper] [πŸ€— Huggingface Dataset] [πŸ† Leaderboard] [πŸ” Visualization]

πŸ’₯ News

πŸ“Œ ToDo

  • Coming soon: evaluation code

πŸ‘€ About MMSearch

The capabilities of Large Multi-modal Models (LMMs) in multimodal search remain insufficiently explored and evaluated. To fill this gap, we first design a carefully crafted pipeline, MMSearch-Engine, that enables any LMM to function as a multimodal AI search engine.


The overview of MMSearch-Engine.
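To make the pipeline concrete, here is a minimal sketch of how such a search loop might wrap a generic LMM. Everything below (the Page type, the callables standing in for the LMM and the search engine, and the prompts) is a hypothetical illustration of the stages, not the repository's actual API.

```python
# A minimal sketch of an MMSearch-Engine-style loop around a generic LMM.
# All names and prompts here are hypothetical illustrations of the stages
# (requery -> rerank -> summarization), not the repository's actual API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Page:
    title: str
    snippet: str
    content: str

def multimodal_search(
    lmm: Callable[[str], str],            # stand-in for an LMM: prompt -> text
    search: Callable[[str], List[Page]],  # stand-in for a text search engine
    question: str,
) -> str:
    # Stage 1 (requery): rewrite the question into a search-engine query.
    requery = lmm(f"Rewrite as a web search query: {question}")

    # Stage 2 (rerank): retrieve candidate pages and ask the LMM to pick
    # the most relevant one from their titles and snippets.
    pages = search(requery)
    listing = "\n".join(f"{i}: {p.title} - {p.snippet}" for i, p in enumerate(pages))
    best = pages[int(lmm(f"Reply with the most relevant index only:\n{listing}"))]

    # Stage 3 (summarization): answer the question from the chosen page.
    return lmm(f"Using this page, answer '{question}':\n{best.content}")
```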

To further evaluate the potential of LMMs in the multimodal search domain, we introduce MMSearch, an all-around benchmark for assessing multimodal search performance. The benchmark contains 300 manually collected instances spanning 14 subfields, with no overlap with current LMMs' training data, ensuring that the correct answers can only be obtained through searching.


The overview of MMSearch.
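For a quick look at the 300 end2end samples and the fields declared in the metadata above, the dataset can be loaded with the Hugging Face datasets library. The repo id CaraJ/MMSearch below is an assumption based on this card's location.

```python
from datasets import load_dataset

# Load the single "end2end" config/split declared in the card metadata.
# The repo id "CaraJ/MMSearch" is assumed from this card's location.
data = load_dataset("CaraJ/MMSearch", "end2end", split="end2end")
print(len(data))  # 300 manually collected instances

sample = data[0]
print(sample["sample_id"], sample["area"], sample["subfield"])
print(sample["query"])       # the user question
print(sample["gt_requery"])  # ground-truth rewritten search query
print(sample["gt_answer"])   # ground-truth final answer
sample["query_image"]        # PIL image associated with the query
```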

In addition, we propose a step-wise evaluation strategy to better understand the LMMs' searching capability. The models are evaluated on three individual tasks (requery, rerank, and summarization) and one challenging end-to-end task covering the complete searching process. The final score is a weighted combination of the scores on the four tasks.


Outline of Evaluation Tasks, Inputs, and Outputs.

An example of the LMM input, output, and ground truth for the four evaluation tasks is shown here.
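As a sketch of the weighting described above, the final score combines the four per-task scores with fixed weights. The weight values below are placeholders for illustration only; the actual weights are specified in the paper.

```python
# Illustrative computation of the final MMSearch score as a weighted sum of
# the four task scores. These weights are placeholders, not the paper's values.
TASK_WEIGHTS = {"end2end": 0.4, "requery": 0.2, "rerank": 0.2, "summarization": 0.2}

def final_score(task_scores: dict) -> float:
    """task_scores: dict mapping task name -> score in [0, 1]."""
    assert set(task_scores) == set(TASK_WEIGHTS), "all four tasks are required"
    return sum(TASK_WEIGHTS[task] * score for task, score in task_scores.items())

print(final_score({"end2end": 0.5, "requery": 0.8, "rerank": 0.7, "summarization": 0.6}))
```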

πŸ† Leaderboard

Contributing to the Leaderboard

🚨 The leaderboard is continuously updated, and we welcome contributions of your excellent LMMs!

✅ Citation

If you find MMSearch useful for your research and applications, please kindly cite using this BibTeX:

@article{jiang2024mmsearch,
  title={MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines},
  author={Dongzhi Jiang and Renrui Zhang and Ziyu Guo and Yanmin Wu and Jiayi Lei and Pengshuo Qiu and Pan Lu and Zehui Chen and Guanglu Song and Peng Gao and Yu Liu and Chunyuan Li and Hongsheng Li},
  journal={arXiv},
  year={2024}
}

🧠 Related Work

Explore our additional research on Vision-Language Large Models: