Serve Alternatives
Similar projects and alternatives to serve
-
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
-
pinferencia
Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
-
BentoML
The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
-
swiss_army_llama
A FastAPI service for semantic text search using precomputed embeddings and advanced similarity measures, with built-in support for various file types through textract.
-
kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
-
optimum
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools
-
transformer-deploy
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
-
openembeddings
Discontinued. A self-hostable, pay-for-what-you-use embedding server for bge-large-en and arbitrary embedding models, with payment in crypto.
serve reviews and mentions
-
Show HN: Llama2 Embeddings FastAPI Server
What's wrong with just using Torchserve[1]? We've been using it to serve embedding models in production.
[1] https://pytorch.org/serve/
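For concreteness, a custom TorchServe handler for an embedding model might look roughly like the sketch below. The checkpoint, pooling strategy, and request format are assumptions for illustration, not details from the thread.

```python
# embedding_handler.py -- hypothetical TorchServe custom handler sketch.
# Assumes a Hugging Face encoder checkpoint packaged into the .mar archive.
import torch
from transformers import AutoModel, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler


class EmbeddingHandler(BaseHandler):
    def initialize(self, context):
        # TorchServe extracts the model archive into model_dir.
        model_dir = context.system_properties.get("model_dir")
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        self.model = AutoModel.from_pretrained(model_dir).eval()
        self.initialized = True

    def preprocess(self, data):
        # Each batched request row carries raw text under "data" or "body".
        texts = [row.get("data") or row.get("body") for row in data]
        texts = [t.decode("utf-8") if isinstance(t, (bytes, bytearray)) else t
                 for t in texts]
        return self.tokenizer(texts, padding=True, truncation=True,
                              return_tensors="pt")

    def inference(self, inputs):
        with torch.no_grad():
            outputs = self.model(**inputs)
        # Mean-pool token states into one embedding vector per input.
        return outputs.last_hidden_state.mean(dim=1)

    def postprocess(self, embeddings):
        # One JSON-serializable item per request in the batch.
        return embeddings.tolist()
```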
-
How to leverage a local LLM for a client?
Looks like you are already up to speed loading LLaMA models, which is great. Assuming this is a Hugging Face PyTorch checkpoint, I think it should be possible to spin up a TorchServe instance, which has built-in support for API access and HF Transformers. Since scale and latency aren't a big concern for you, this should be a good enough start.
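For reference, the standard TorchServe workflow for that is roughly: package the checkpoint with torch-model-archiver, then start the server. File and model names below are placeholders, not anything from the thread.

```bash
# Package the checkpoint and handler into a .mar archive (names illustrative).
torch-model-archiver --model-name my_llm --version 1.0 \
  --serialized-file model/pytorch_model.bin \
  --handler custom_handler.py \
  --extra-files "model/config.json,model/tokenizer.json" \
  --export-path model_store

# Start TorchServe; the inference API listens on port 8080 by default.
torchserve --start --ncs --model-store model_store --models my_llm=my_llm.mar

curl -X POST http://localhost:8080/predictions/my_llm -T input.txt
```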
- Is there a course that teaches you how to make an API with a trained model?
-
Pytorch eating memory on every api call
You could split the service in two: Flask for the web part and a separate service to serve the model. I haven't used it myself, but there is https://pytorch.org/serve/
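A minimal sketch of that split, assuming TorchServe is already running and a model was registered under the (made-up) name my_model:

```python
# app.py -- illustrative sketch: Flask handles the web layer and forwards
# inference to a separately running TorchServe instance.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# TorchServe's inference API listens on port 8080 by default;
# "my_model" is a placeholder for whatever name the model was registered under.
TORCHSERVE_URL = "http://localhost:8080/predictions/my_model"

@app.route("/predict", methods=["POST"])
def predict():
    # Forward the raw request body; the model process owns all GPU/CPU memory,
    # so the web workers stay small and can be restarted independently.
    resp = requests.post(TORCHSERVE_URL, data=request.get_data(), timeout=30)
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=5000)
```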
-
Google Kubernetes Engine : Unable to access ports exposed on external IP
I'm attempting to set up inference for a TorchServe container, and it's really tough to figure out what's preventing me from connecting on the ports I'm trying to expose. I'm using Google Kubernetes Engine and Helm, tweaking one of the tutorials at [torchserve](github.com/pytorch/serve). Specifically, it's the GKE tutorial [here](https://github.com/pytorch/serve/tree/master/kubernetes).
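For anyone hitting the same wall: with TorchServe's defaults, the Service has to expose port 8080 (inference) and 8081 (management), and its selector has to match the chart's pod labels, or the LoadBalancer's external IP won't route anywhere. A hedged sketch (names are illustrative, not taken from the tutorial):

```yaml
# Illustrative Kubernetes Service sketch for a TorchServe deployment.
apiVersion: v1
kind: Service
metadata:
  name: torchserve
spec:
  type: LoadBalancer
  selector:
    app: torchserve   # must match the pod labels created by the Helm chart
  ports:
    - name: inference
      port: 8080
      targetPort: 8080
    - name: management
      port: 8081
      targetPort: 8081
```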
-
BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
I did a Space to showcase the speedups we can get in an end-to-end case, using TorchServe to deploy the model on a cloud instance (AWS EC2 g4dn with one T4 GPU): https://huggingface.co/spaces/fxmarty/bettertransformer-demo
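The transform itself is a couple of lines via Hugging Face Optimum; a minimal sketch with a placeholder checkpoint (the demo's actual model may differ):

```python
# Illustrative sketch of enabling BetterTransformer via Optimum.
from optimum.bettertransformer import BetterTransformer
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # placeholder checkpoint
# Swaps supported encoder layers for PyTorch-native fastpath kernels.
model = BetterTransformer.transform(model)
```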
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the 'transformer-deploy' version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like BentoML, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least optionally), and has some convenience features: ease of use, easy to deploy, easy to host multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experiences here, as I know there are now quite a few options! Any help is appreciated! Disclaimer: I have no affiliation with, nor am I connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
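On the dynamic-batching point: TorchServe configures it per model at registration time through its management API (port 8081 by default). A sketch with placeholder values:

```python
# Illustrative sketch: registering a model with dynamic batching through
# TorchServe's management API. The archive name and settings are placeholders.
import requests

resp = requests.post(
    "http://localhost:8081/models",
    params={
        "url": "my_model.mar",    # archive already present in the model store
        "batch_size": 8,          # max requests merged into one forward pass
        "max_batch_delay": 50,    # ms to wait while filling a batch
        "initial_workers": 1,
    },
)
print(resp.status_code, resp.text)
```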
-
how to integrate a deep learning model into a Django webapp!?
If you built the model using pytorch or tensorflow, I'd suggest using torchserve or TF serving to serve the model as its own "microservice," then query it from your django app. Among other things, it will make retraining and updating your model a lot easier.
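From the Django side, the view then just proxies to the model service; a minimal sketch with placeholder host and model names:

```python
# views.py -- illustrative: the Django app stays model-free and queries the
# TorchServe (or TF Serving) microservice over HTTP.
import requests
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def predict(request):
    # "torchserve" and "my_model" are placeholders for your service host
    # and registered model name.
    resp = requests.post(
        "http://torchserve:8080/predictions/my_model",
        data=request.body,
        timeout=30,
    )
    return JsonResponse(resp.json(), safe=False, status=resp.status_code)
```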
- Choose JavaScript
-
Popular Machine Learning Deployment Tools
GitHub
Stats
pytorch/serve is an open source project licensed under Apache License 2.0, which is an OSI-approved license.
The primary programming language of serve is Java.