Dataset: BAAI/CMMU
Modalities: Image, Text
Format: parquet
Language: Chinese
Libraries: Datasets, Dask
Commit b3b6556 (parent: 21a65cb), committed by hezheqi

Update readme

Files changed (1): README.md (+5, -3)
README.md CHANGED
@@ -43,9 +43,11 @@ configs:
   - "val/*.parquet"
 ---
 # CMMU
-[**📖 Paper**](https://arxiv.org/) | [**🤗 Dataset**](https://huggingface.co/datasets) | [**GitHub**](https://github.com/FlagOpen/CMMU)
+[**📖 Paper**](https://arxiv.org/abs/2401.14011) | [**🤗 Dataset**](https://huggingface.co/datasets/BAAI/CMMU) | [**GitHub**](https://github.com/FlagOpen/CMMU)
 
-This repo contains the evaluation code for the paper [**CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning**](https://arxiv.org/).
+This repo contains the evaluation code for the paper [**CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning**](https://arxiv.org/abs/2401.14011).
+
+We release the validation set of CMMU; you can download it from [here](https://huggingface.co/datasets/BAAI/CMMU). The test set will be hosted on the [FlagEval platform](https://flageval.baai.ac.cn/), where users can evaluate their models by uploading them.
 
 ## Introduction
 CMMU is a novel multi-modal benchmark designed to evaluate domain-specific knowledge across seven foundational subjects: math, biology, physics, chemistry, geography, politics, and history. It comprises 3603 questions, incorporating text and images, drawn from a range of Chinese exams. Spanning primary to high school levels, CMMU offers a thorough evaluation of model capabilities across different educational stages.
@@ -74,7 +76,7 @@ We currently evaluated 10 models on CMMU. The results are shown in the following
 @article{he2024cmmu,
 title={CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning},
 author={Zheqi He, Xinya Wu, Pengfei Zhou, Richeng Xuan, Guang Liu, Xi Yang, Qiannan Zhu and Hua Huang},
-journal={arXiv preprint},
+journal={arXiv preprint arXiv:2401.14011},
 year={2024},
 }
 ```
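For readers who want to try the released validation split, here is a minimal loading sketch using the 🤗 `datasets` library. The split name `val` is an assumption inferred from the `val/*.parquet` config path shown in the diff above; adjust it if the dataset card lists a different split name.

```python
def load_cmmu_val(repo_id: str = "BAAI/CMMU", split: str = "val"):
    """Download the CMMU validation split from the Hugging Face Hub.

    The split name "val" is an assumption based on the "val/*.parquet"
    pattern in the dataset config; change it if the card says otherwise.
    """
    from datasets import load_dataset  # pip install datasets

    return load_dataset(repo_id, split=split)


if __name__ == "__main__":
    ds = load_cmmu_val()  # requires network access to the Hub
    print(ds)             # prints the split's features and row count
```

Since the data is stored as parquet shards, the same files can also be read directly with Dask or pandas if you prefer to bypass the `datasets` loader.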