# LLaVA-VideoGameVQA - Work In Progress - Model Card

## Model details
**Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

**Model date:** LLaVA-v1.5-13B-LoRA was trained in December 2023.
## LoRA Weights
- Checkpoint 1: trained on 28K question-answering pairs. Base model: liuhaotian/llava-v1.5-13b
- Checkpoint 5: trained on 74K question-answering pairs. Base model: liuhaotian/llava-v1.5-13b
- Checkpoint 8: trained on 185K question-answering pairs. Base model: liuhaotian/llava-v1.5-13b
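Each checkpoint contains only the LoRA adapter weights, so it has to be loaded on top of the base model. The snippet below is a minimal loading sketch using the builder from the LLaVA repository; the local directory `./lora-checkpoints-8` matches the path used in the serving command further down and stands in for whichever checkpoint you downloaded.

```python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Local directory holding one of the LoRA checkpoints (placeholder path,
# same as the one used in the serving command below).
model_path = "./lora-checkpoints-8"

# Because the directory name contains "lora", the builder loads the base
# model first and then merges the adapter weights into it.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base="liuhaotian/llava-v1.5-13b",
    model_name=get_model_name_from_path(model_path),
)
```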
## How to run
```bash
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./lora-checkpoints-8 --model-base liuhaotian/llava-v1.5-13b
```
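The worker registers itself with a LLaVA controller that must already be running at `http://localhost:10000` (started with `python -m llava.serve.controller --host 0.0.0.0 --port 10000`) and can then be queried, e.g., through the Gradio web UI. For a quick single-image test without the serving stack, the sketch below uses the `eval_model` helper from the LLaVA repository; the image path and question are placeholders.

```python
from llava.eval.run_llava import eval_model
from llava.mm_utils import get_model_name_from_path

model_path = "./lora-checkpoints-8"  # one of the LoRA checkpoint directories

# Argument object mirroring the CLI flags of llava.eval.run_llava;
# image_file and query are placeholders for a real screenshot and question.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": "liuhaotian/llava-v1.5-13b",
    "model_name": get_model_name_from_path(model_path),
    "query": "What is happening in this game screenshot?",
    "conv_mode": None,
    "image_file": "screenshot.png",
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```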