support tokenised prompt (online vllm) #17
opened by Payoto
Online vLLM inference currently passes an already pre-processed text prompt to the multimodal preprocessor; this PR adds support for passing a tokenised prompt instead.
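As a rough illustration of what a tokenised prompt looks like on the online path: vLLM's OpenAI-compatible completions endpoint accepts a list of token IDs in the `prompt` field in place of a text string. The sketch below builds such a payload; the model name, token IDs, and helper function are hypothetical, not part of this PR.

```python
import json

def build_completion_request(token_ids, model="my-multimodal-model", max_tokens=64):
    """Build a /v1/completions payload whose prompt is a list of token IDs
    rather than raw text, so the server skips text tokenisation."""
    if not all(isinstance(t, int) for t in token_ids):
        raise TypeError("tokenised prompt must be a list of ints")
    return {
        "model": model,
        "prompt": token_ids,  # token IDs, not a text string
        "max_tokens": max_tokens,
    }

# Example payload with placeholder token IDs
payload = build_completion_request([101, 2023, 2003, 1037, 3231, 102])
print(json.dumps(payload))
```

With a payload like this, the server-side preprocessor receives token IDs directly instead of a pre-processed text prompt.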