vLLM Omni
Easy, fast, and cheap omni-modality model serving for everyone
An open-source framework for omni-modality model inference and serving. It extends vLLM to handle text, image, video, and audio, supports non-autoregressive architectures, and exposes an OpenAI-compatible API server.
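
Because the server is OpenAI-compatible, it can be queried with the standard `openai` Python client. The sketch below is illustrative only: the base URL, port, model name, and image URL are placeholder assumptions, not values documented by the project.

```python
# Minimal sketch: send a multimodal chat request to an OpenAI-compatible
# endpoint. Assumes a server is already running locally on port 8000 and
# serving a model under the placeholder name "omni-model".
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="EMPTY",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="omni-model",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},  # placeholder
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```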
