---
title: README
emoji: 🚀
colorFrom: indigo
colorTo: pink
sdk: static
pinned: false
---

*(InfiMM logo)*


# InfiMM

InfiMM, inspired by the Flamingo architecture, sets itself apart with unique training data and a diverse set of large language models (LLMs). This approach allows InfiMM to retain Flamingo's core strengths while offering enhanced capabilities. As the premier open-source variant in this domain, InfiMM excels in accessibility and adaptability, driven by community collaboration. It is more than an emulation of Flamingo; it is an innovation in visual language processing.

Our model is another attempt to reproduce the results reported in DeepMind's Flamingo paper ("Flamingo: a Visual Language Model for Few-Shot Learning"). Compared with previous open-source attempts (OpenFlamingo and IDEFICS), InfiMM offers more flexible models, allowing for a wide range of applications. In particular, InfiMM integrates the latest LLMs into the VLM domain and reveals the impact of LLMs with different scales and architectures.
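
The released checkpoints are presumably hosted on the Hugging Face Hub. As a rough, untested sketch only, the snippet below shows how such a checkpoint might be loaded with the `transformers` library; the repository id `Infi-MM/infimm-zephyr`, the auto classes, and the dtype are assumptions rather than instructions from this README, so please consult the individual model cards for the exact usage.

```python
# Minimal sketch (assumptions, not taken from this README): loading an
# InfiMM checkpoint with Hugging Face transformers. The repository id,
# auto classes, and dtype below may differ from the actual model cards.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Infi-MM/infimm-zephyr"  # hypothetical Hub identifier

# trust_remote_code is required for models that ship custom modeling code.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval()
```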

Please note that InfiMM is currently in its beta stage, and we are continuously working on improving it.

## News

- 🎉 [2024.01.11] We release the first set of MLLMs we trained: InfiMM-Zephyr, InfiMM-LLaMA13B, and InfiMM-Vicuna13B.
- 🎉 [2024.01.10] We release a survey on the reasoning capabilities of Multimodal Large Language Models (MLLMs), available here.
- 🎉 [2023.11.18] We release InfiMM-Eval, an open-ended VQA benchmark dataset specifically designed for MLLMs with a focus on complex reasoning tasks, available here. The leaderboard can be found via Papers with Code or the project page.