vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
Why do you think that https://github.com/vllm-project/vllm is a good alternative to vllm-rocm?