xiaotianhan committed
Commit: 6dbdea3
Parent: f37b168

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -33,7 +33,7 @@ In particular, InfiMM integrates the latest LLM models into VLM domain the revea
  Please note that InfiMM is currently in beta stage and we are continuously working on improving it.
 
  ## News
 -
 + - 🎉 **[2024.03.02]** We release the [InfiMM-HD](https://huggingface.co/Infi-MM/infimm-hd).
  - 🎉 **[2024.01.11]** We release the first set of MLLMs we trained [InfiMM-Zephyr](https://huggingface.co/Infi-MM/infimm-zephyr), [InfiMM-LLaMA13B](https://huggingface.co/Infi-MM/infimm-llama13b) and [InfiMM-Vicuna13B](https://huggingface.co/Infi-MM/infimm-vicuna13b).
  - 🎉 **[2024.01.10]** We release a survey about Multimodal Large Language Models (MLLMs) reasoning capability at [here](https://huggingface.co/papers/2401.06805).
  - 🎉 **[2023.11.18]** We release InfiMM-Eval at [here](https://arxiv.org/abs/2311.11567), an Open-ended VQA benchmark dataset specifically designed for MLLMs, with a focus on complex reasoning tasks. The leaderboard can be found via [Papers with Code](https://paperswithcode.com/sota/visual-question-answering-vqa-on-core-mm) or [project page](https://infimm.github.io/InfiMM-Eval/).