Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
Why do you think https://github.com/Dao-AILab/flash-attention is a good alternative to transformer-deploy?