arxiv:2311.00571

LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing

Published on Nov 1, 2023
· Submitted by akhaliq on Nov 2, 2023
#2 Paper of the day

Abstract

LLaVA-Interactive is a research prototype for multimodal human-AI interaction. The system can hold multi-turn dialogues with human users, taking multimodal inputs and generating multimodal responses. Importantly, LLaVA-Interactive goes beyond language prompts: visual prompts are supported to align human intents in the interaction. The development of LLaVA-Interactive is extremely cost-efficient, as the system combines three multimodal skills from pre-built AI models without any additional model training: visual chat from LLaVA, image segmentation from SEEM, and image generation and editing from GLIGEN. A diverse set of application scenarios is presented to demonstrate the promise of LLaVA-Interactive and to inspire future research on multimodal interactive systems.
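
The training-free composition described in the abstract amounts to routing each user turn to one of three pre-built skills. The sketch below is a hypothetical illustration of that idea only; it is not the actual LLaVA-Interactive code, and visual_chat, segment_image, edit_image, and the keyword-based routing rule are illustrative stubs standing in for the LLaVA, SEEM, and GLIGEN components.

# Minimal sketch, assuming a per-turn router over three frozen skills.
# All function names are placeholders, not the real LLaVA, SEEM, or GLIGEN APIs.
from dataclasses import dataclass, field
from typing import Any, List, Tuple


def visual_chat(image: Any, history: List[Tuple[str, str]], text: str) -> str:
    return "stub reply"        # stand-in for the LLaVA visual-chat skill


def segment_image(image: Any, visual_prompt: Any) -> List[Any]:
    return [visual_prompt]     # stand-in for the SEEM segmentation skill


def edit_image(image: Any, text: str, masks: List[Any]) -> Any:
    return image               # stand-in for the GLIGEN generation/editing skill


@dataclass
class InteractionState:
    image: Any                                                     # current working image
    history: List[Tuple[str, str]] = field(default_factory=list)  # (user, assistant) turns
    masks: List[Any] = field(default_factory=list)                 # masks from visual prompts


def handle_turn(state: InteractionState, text: str, visual_prompt: Any = None) -> str:
    """Route one user turn to the appropriate pre-built skill; no model is retrained."""
    if visual_prompt is not None:
        # A visual prompt (strokes, boxes) is converted into segmentation masks.
        state.masks = segment_image(state.image, visual_prompt)
    if text.lower().startswith(("edit", "remove", "replace", "add")):
        # Grounded generation/editing conditioned on the selected regions.
        state.image = edit_image(state.image, text, state.masks)
        reply = "Here is the edited image."
    else:
        # Plain multi-turn visual chat over the current image.
        reply = visual_chat(state.image, state.history, text)
    state.history.append((text, reply))
    return reply

The point of the sketch is the design choice the abstract highlights: because each skill is a frozen, pre-built model, the system-level behavior comes entirely from how turns are routed and how state (image, masks, dialogue history) is passed between skills.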

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space.

What kind of images can I try?


Models citing this paper: 8


Datasets citing this paper: 0

No dataset links to this paper yet.

Cite arxiv.org/abs/2311.00571 in a dataset README.md to link it from this page.

Spaces citing this paper: 2

Collections including this paper: 26