support tokenised prompt (online vllm)

#17
by Payoto - opened

Online vLLM inference can pass an already tokenised prompt (token IDs rather than raw text) to the multimodal preprocessor; this PR adds support for that case.
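A minimal sketch of the idea: the preprocessor should accept either a raw string or a pre-tokenised list of IDs. The helper name `normalize_prompt` and the toy tokenizer below are hypothetical, not vLLM's actual API.

```python
from typing import List, Union


def normalize_prompt(prompt: Union[str, List[int]], tokenizer) -> List[int]:
    """Return token IDs whether the caller sent raw text or pre-tokenised IDs."""
    if isinstance(prompt, str):
        # Offline/text path: tokenise the raw string here.
        return tokenizer(prompt)
    # Online path: the prompt was already tokenised upstream, pass IDs through.
    return list(prompt)


# Toy stand-in tokenizer: maps each word to its length as a fake token ID.
toy_tokenizer = lambda text: [len(w) for w in text.split()]

print(normalize_prompt("describe this image", toy_tokenizer))  # [8, 4, 5]
print(normalize_prompt([101, 2003, 102], toy_tokenizer))       # [101, 2003, 102]
```

Branching on the input type keeps a single preprocessing entry point working for both the text-prompt and token-ID calling conventions.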

Payoto changed pull request status to open
