
KoLLaVA : Korean Large Language and Vision Assistant (feat. LLaVA)

This model is a large multimodal model (LMM) that combines the LLM Synatra with the visual encoder of CLIP (clip-vit-large-patch14-336), trained on the Korean visual-instruction dataset KoLLaVA-v1.5-Instruct-581k.

Detailed code is available in the KoLLaVA GitHub repository.
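
Below is a minimal inference sketch, assuming the checkpoint follows the upstream LLaVA v1.5 loading conventions (the `llava` package and its `eval_model` helper from the LLaVA repository). The model path, prompt, and image URL are illustrative placeholders, not confirmed values from this card.

```python
# Minimal inference sketch, assuming the checkpoint is loadable with the
# upstream LLaVA v1.5 codebase. Replace the placeholders below as needed.
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "tabtoyou/KoLLaVA-v1.5-Synatra-7b"  # placeholder: use the actual Hub repo id
prompt = "이 이미지에 무엇이 보이나요?"  # "What do you see in this image?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"  # example image from the LLaVA project page

# eval_model expects a simple namespace-like args object (as in the LLaVA README quick start)
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```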

License

This model is strictly for non-commercial use (cc-by-sa-4.0). Under 5K MAU, the model (i.e., the base model, derivatives, and merges/mixes) is completely free to use for non-commercial purposes, provided the included cc-by-sa-4.0 license is retained in any parent repository and the non-commercial use clause remains in effect, regardless of other models' licenses. If your service has over 5K MAU, contact me for license approval.
