SpursgoZmy committed
Commit ca2a234
1 Parent(s): ca19a28

Update README.md

Files changed (1):
1. README.md +1 -1
README.md CHANGED
@@ -33,7 +33,7 @@ It was trained with a two-stage pipeline as LLaVA:
 2. Instruction tuning: train the vision-language connector and the base LLM with multimodal instruction following data of tabular and non-tabular tasks.
 
 **Code Base:** We use the official code of [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) for model training and inference,
-and the saved model checkpoint is uploaded to this repository.
+and the saved model checkpoint is uploaded to this repository. Thus, Table LLaVA can be used in the same way as the normal LLaVA v1.5 model with its original code.
 
 **Model Date:** Table-LLaVA 7B was trained in January 2024.
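Since the added line states that Table LLaVA is usable exactly like a standard LLaVA-v1.5 checkpoint with the original code, a minimal usage sketch follows the LLaVA repo's documented quickstart. The repository id `SpursgoZmy/table-llava-v1.5-7b`, the image path, and the prompt are assumptions for illustration, not taken from this commit.

```python
# Minimal sketch: run Table-LLaVA through the official LLaVA-v1.5 inference
# entry point, the same way one would run the base llava-v1.5 model.
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "SpursgoZmy/table-llava-v1.5-7b"  # assumed repo id for this checkpoint
image_file = "table.png"                       # hypothetical table screenshot
prompt = "Convert the table in this image to Markdown."

# eval_model expects an argparse-like namespace; this mirrors the
# quickstart shown in the LLaVA-v1.5 README.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```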