Regarding CPU inference, quantization is very easy and supported by Transformer-deploy. However, performance on transformers is very low outside of corner cases (no batching, very short sequences, distilled models), and the latest Intel-generation CPU instances on AWS (C6 or M6) are quite expensive compared to a cheap GPU like an Nvidia T4. Put another way: unless you are OK with slow inference on a small instance (for a PoC, for instance), CPU inference for transformers is probably not a good idea.
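To show how little code the "very easy" part takes, here is a minimal sketch of dynamic int8 quantization for CPU inference using plain PyTorch on a distilled model (one of the corner cases mentioned above). Transformer-deploy has its own ONNX Runtime based pipeline, so treat this only as an illustration, not its actual API; the model name is just an example.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # distilled model: stays usable on CPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# Dynamic quantization: Linear weights become int8, activations stay fp32.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Batch size 1, very short sequence: the regime where CPU inference is bearable.
inputs = tokenizer("a short sequence", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
```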
If you are ever interested in looking at pruning, I'd be happy to integrate my open source library https://github.com/marsupialtail/sparsednn. The latest update has unstructured and structured sparse int8 kernels: a 3x speedup over dense int8 at 90 percent sparsity with 1x4 blocks.
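For readers unfamiliar with pruning, here is a generic sketch of producing a ~90% sparse weight matrix with PyTorch's built-in pruning utilities. This is not the sparsednn API; it only illustrates the kind of sparse weights that dedicated sparse kernels (like sparsednn's 1x4 block kernels) would then accelerate.

```python
import torch
import torch.nn.utils.prune as prune

linear = torch.nn.Linear(768, 768)

# Unstructured magnitude pruning: zero out the 90% smallest weights.
# (sparsednn also supports a structured 1x4 block pattern, which this does not reproduce.)
prune.l1_unstructured(linear, name="weight", amount=0.9)
prune.remove(linear, "weight")  # make the sparsity permanent

print(f"sparsity: {(linear.weight == 0).float().mean():.2%}")
```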
Have you tried the new Torch-TensorRT compiler from NVIDIA?
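For reference, a minimal sketch of the Torch-TensorRT compile API is below, shown on a vision model for simplicity since transformer models typically need dynamic-shape Inputs and more care around traced outputs. The exact arguments can differ between releases, so check the NVIDIA docs.

```python
import torch
import torch_tensorrt
import torchvision.models as models

model = models.resnet50(weights=None).eval().cuda()

# Compile to a TensorRT-backed module; static batch of 1 is an assumption for this sketch.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float16},  # allow fp16 TensorRT kernels
)

with torch.no_grad():
    out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
```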
Take a look at https://github.com/open-mmlab/mmrazor, it may work for you.