Serve Alternatives
Similar projects and alternatives to serve
- server: The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
- JavaScriptClassifier: [Moved to: https://github.com/JonathanSum/JavaScriptClassifier]
- pinferencia: Python + Inference. A model deployment library in Python, billed as the simplest model inference server ever. (See the usage sketch after this list.)
- kernl: Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. (Sketch after this list.)
- ML-Workspace: 🛠 All-in-one web-based IDE specialized for machine learning and data science.
- transformers: 🤗 Transformers: State-of-the-art machine learning for PyTorch, TensorFlow, and JAX.
- deepsparse: Inference runtime offering GPU-class performance on CPUs, plus APIs to integrate ML into your application. (Example after this list.)
- transformer-deploy: Efficient, scalable, enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
- optimum: 🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools. (Example after this list.)
- torchdynamo: A Python-level JIT compiler designed to make unmodified PyTorch programs faster. (Sketch after this list.)
- openai-whisper-cpu: Improving transcription performance of OpenAI Whisper for CPU-based deployment.
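pinferencia's pitch is registering a plain Python object and serving it over REST. A minimal sketch, assuming its documented Server/register pattern; names like MyModel and mymodel are illustrative, and the exact signature may differ between versions.

```python
from pinferencia import Server

class MyModel:                      # illustrative: any object with a callable entrypoint works
    def predict(self, data):
        return sum(data)

service = Server()
service.register(model_name="mymodel", model=MyModel(), entrypoint="predict")

# Save as app.py, then run:  uvicorn app:service --reload
# and POST JSON to the registered model's predict endpoint.
```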
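kernl's "single line of code" is its optimize_model entry point, which patches a model in place with fused OpenAI Triton kernels. A sketch assuming kernl's documented API; it needs a CUDA GPU and fp16 autocast at inference time.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # the advertised single line: patches the model in place

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("hello kernel fusion", return_tensors="pt").to("cuda")
with torch.inference_mode(), torch.cuda.amp.autocast():
    out = model(**inputs)
```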
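deepsparse exposes a transformers-style Pipeline over ONNX models on CPU. A sketch assuming a locally exported ONNX file; the "model.onnx" path is a placeholder, and the input format follows the task's schema (here, a list of sequences).

```python
from deepsparse import Pipeline

# "model.onnx" is a placeholder for a (preferably sparsified/quantized) ONNX export
clf = Pipeline.create(task="text-classification", model_path="model.onnx")
print(clf(["DeepSparse runs this on CPU"]))
```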
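optimum accelerates inference largely through drop-in replacements for transformers classes. A sketch of the ONNX Runtime path; depending on your optimum version, the export flag is from_transformers=True or export=True.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Export the PyTorch checkpoint to ONNX and load it in ONNX Runtime
model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("optimum swapped the backend to ONNX Runtime"))
```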
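torchdynamo captures graphs from unmodified Python code via CPython frame evaluation hooks and hands them to a backend compiler. A sketch using the standalone package's decorator API; in PyTorch 2.x the same machinery is exposed as torch.compile.

```python
import torch
import torchdynamo  # standalone package; merged into PyTorch 2.x as torch.compile

@torchdynamo.optimize("inductor")  # backend name; several compilers are supported
def f(x):
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

print(f(torch.randn(8)))  # first call compiles; later calls hit the cached graph
```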
serve reviews and mentions
- BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
  I made a Space to showcase some of the end-to-end speedups, using TorchServe to deploy the model on a cloud instance (an AWS EC2 g4dn with one T4 GPU): https://huggingface.co/spaces/fxmarty/bettertransformer-demo
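For context, the BetterTransformer conversion showcased in that Space is a one-call transform in optimum; a minimal sketch (the model choice is illustrative):

```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("bert-base-uncased")
model = BetterTransformer.transform(model)  # swaps encoder layers for PyTorch's fused fastpath kernels
```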
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
  For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the 'transformer-deploy' version from LDS. My only reservation is that they require model compilation or are architecture-specific. I am aware of others such as BentoML, Ray Serve, and TorchServe. Ideally I want something that serves any PyTorch model without extra compilation effort (or at least makes it optional), with conveniences such as ease of use, easy deployment, hosting multiple models, and dynamic batching. I am really interested to hear people's experience here, as I know there are now quite a few options; any help is appreciated! Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
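For reference, TorchServe meets most of the criteria in that question: it serves arbitrary PyTorch models without ahead-of-time compilation, hosts multiple models from one model store, and supports dynamic batching via batch_size and max_batch_delay set at model registration. A minimal custom-handler sketch (the class name and JSON format are illustrative):

```python
import torch
from ts.torch_handler.base_handler import BaseHandler

class JsonTensorHandler(BaseHandler):
    """Illustrative handler: each request body is a JSON list of floats."""

    def preprocess(self, data):
        # With dynamic batching enabled, `data` holds several requests at once.
        rows = [row.get("data") or row.get("body") for row in data]
        return torch.tensor(rows, dtype=torch.float32, device=self.device)

    def postprocess(self, output):
        return output.tolist()  # one response entry per request in the batch
```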
- Choose JavaScript 🧠
- Popular Machine Learning Deployment Tools (GitHub)
Stats
pytorch/serve is an open source project licensed under the Apache License 2.0, which is an OSI-approved license.