Weiyun1025 committed
Commit: c5d8dc2
Parent: b7e8834

Upload folder using huggingface_hub

Files changed (1): README.md (+22, −22)
README.md CHANGED
@@ -5,9 +5,9 @@ pipeline_tag: visual-question-answering
 
 # InternVL2-8B
 
- [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)
+ [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)
 
- [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#model-usage) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376)
+ [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376)
 
 ## Introduction
 
@@ -23,26 +23,26 @@ InternVL2 is a multimodal large language model series, featuring models of vario
 
 ## Performance
 
- | Benchmark | MiniCPM-Llama3-V-2_5 | InternVL2-8B |
- | :--------------------------: | :------------------: | :----------: |
- | Model Size | 8.5B | 8.1B |
- | | | |
- | DocVQA<sub>test</sub> | 84.8 | 91.6 |
- | ChartQA<sub>test</sub> | - | 83.3 |
- | InfoVQA<sub>test</sub> | - | 74.8 |
- | TextVQA<sub>val</sub> | 76.6 | 77.4 |
- | OCRBench | 725 | 794 |
- | MME<sub>sum</sub> | 2024.6 | 2210.3 |
- | RealWorldQA | 63.5 | 64.4 |
- | AI2D<sub>test</sub> | 78.4 | 83.8 |
- | MMMU<sub>val</sub> | 45.8 | 49.3 |
- | MMBench-EN<sub>test</sub> | 77.2 | 81.7 |
- | MMBench-CN<sub>test</sub> | 74.2 | 81.2 |
- | CCBench<sub>dev</sub> | 45.9 | 75.9 |
- | MMVet<sub>GPT-4-0613</sub> | - | 60.0 |
- | SEED-Image | 72.3 | 76.2 |
- | HallBench<sub>avg</sub> | 42.4 | 45.2 |
- | MathVista<sub>testmini</sub> | 54.3 | 58.3 |
+ | Benchmark | MiniCPM-Llama3-V-2_5 | InternVL-Chat-V1-5 | InternVL2-8B |
+ | :--------------------------: | :------------------: | :----------------: | :----------: |
+ | Model Size | 8.5B | | 8.1B |
+ | | | | |
+ | DocVQA<sub>test</sub> | 84.8 | | 91.6 |
+ | ChartQA<sub>test</sub> | - | | 83.3 |
+ | InfoVQA<sub>test</sub> | - | | 74.8 |
+ | TextVQA<sub>val</sub> | 76.6 | | 77.4 |
+ | OCRBench | 725 | | 794 |
+ | MME<sub>sum</sub> | 2024.6 | | 2210.3 |
+ | RealWorldQA | 63.5 | | 64.4 |
+ | AI2D<sub>test</sub> | 78.4 | | 83.8 |
+ | MMMU<sub>val</sub> | 45.8 | | 49.3 |
+ | MMBench-EN<sub>test</sub> | 77.2 | | 81.7 |
+ | MMBench-CN<sub>test</sub> | 74.2 | | 81.2 |
+ | CCBench<sub>dev</sub> | 45.9 | | 75.9 |
+ | MMVet<sub>GPT-4-0613</sub> | - | | 60.0 |
+ | SEED-Image | 72.3 | | 76.2 |
+ | HallBench<sub>avg</sub> | 42.4 | | 45.2 |
+ | MathVista<sub>testmini</sub> | 54.3 | | 58.3 |
 
 - We simultaneously use InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. MMMU, OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
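
For readers landing here from the diff: the commit retargets the [🚀 Quick Start] link at a `#quick-start` anchor in the full model card. As a rough illustration of what that section covers, here is a minimal inference sketch. It assumes the `transformers` remote-code path used by InternVL releases (an `AutoModel` that exposes a `chat()` method) and it substitutes a simplified single-crop `load_image()` stand-in for the card's dynamic-tiling preprocessing helper; both are assumptions for illustration, not verbatim from this commit.

```python
# Minimal sketch, not verbatim from the model card. Assumes the InternVL2
# remote-code API (AutoModel exposing chat()) works as on other InternVL
# releases; load_image() below is a simplified stand-in for the card's
# dynamic-tiling preprocessing helper.
import torch
from PIL import Image
import torchvision.transforms as T
from transformers import AutoModel, AutoTokenizer

def load_image(path: str, size: int = 448) -> torch.Tensor:
    # Single resized crop, normalized with ImageNet statistics; the real
    # helper tiles the image into multiple 448x448 crops instead.
    transform = T.Compose([
        T.Resize((size, size)),
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    return transform(Image.open(path).convert("RGB")).unsqueeze(0)

path = "OpenGVLab/InternVL2-8B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B model on a single GPU
    trust_remote_code=True,      # the repo ships its own modeling code
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

pixel_values = load_image("./example.jpg").to(torch.bfloat16).cuda()
question = "<image>\nDescribe this image in detail."
generation_config = dict(max_new_tokens=512, do_sample=False)

# chat() is provided by the model's remote code (assumed signature).
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```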