By the time you read this post, you may already know of, or have tried, TorchServe, Triton, Seldon Core, TF Serving, or even KServe. They are good products. However, if your model is not trivially simple, or if the model is only one part of a larger codebase, integrating your existing code with them is not easy. Here you have an alternative: Pinferencia (for more tutorials, visit https://pinferencia.underneathall.app/).
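As a rough illustration of the "keep your own code, just register it" idea, here is a minimal sketch based on Pinferencia's documented `Server.register` API. The `MyModel` class and its doubling logic are placeholder stand-ins for your own inference code, and the example assumes `pip install "pinferencia[uvicorn]"` has been run; the import is guarded so the model remains usable on its own.

```python
# Sketch: wrapping existing inference code with Pinferencia.
# MyModel is a hypothetical stand-in for your own model code.

class MyModel:
    def predict(self, data):
        # Your existing inference logic goes here;
        # this toy version just doubles each input.
        return [x * 2 for x in data]


model = MyModel()

try:
    # Assumes pinferencia is installed (pip install "pinferencia[uvicorn]").
    from pinferencia import Server

    service = Server()
    service.register(
        model_name="mymodel",
        model=model,
        entrypoint="predict",  # method Pinferencia calls on each request
    )
except ImportError:
    # Pinferencia not installed; the model is still callable directly.
    service = None
```

Saved as `app.py`, this would typically be served with `uvicorn app:service --reload`, after which the model is reachable over REST (Pinferencia exposes endpoints of the form `/v1/models/{model_name}/predict`) and via the bundled UI.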
Related posts
- Show HN: Pinferencia, Deploy Your AI Models with Pretty UI and REST API
- Stop Writing Flask to Serve/Deploy Your Model: Pinferencia is Here
- Looking for a reference design pattern for an image to image microservice
- Pre-trained Model with Fine Tuning/Transfer Learning or Design and Train from Scratch?