| | server | serve |
|---|---|---|
| Mentions | 27 | 11 |
| Stars | 8,144 | 4,172 |
| Growth | 2.6% | 1.0% |
| Activity | 9.4 | 9.5 |
| Latest commit | 1 day ago | 5 days ago |
| Language | Python | Java |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
server
- Everything you need to know about Python 3.13 – JIT and GIL went up the hill
As always, it depends a lot on what you're doing, and a lot of people are using Python for AI.
One of the drawbacks of multi-processing versus multi-threading is that you cannot share memory (easily, cheaply) between processes. During model training, and even during inference, this becomes a problem.
For example, imagine a high volume, low latency, synchronous computer vision inference service. If you're handling each request in a different process, then you're going to have to jump through a bunch of hoops to make this performant. For example, you'll need to use shared memory to move data around, because images are large, and sockets are slow. Another issue is that each process will need a different copy of the model in GPU memory, which is a problem in a world where GPU memory is at a premium. You could of course have a single process for the GPU processing part of your model, and then automatically batch inputs into this process, etc. etc. (and people do) but all this is just to work around the lack of proper threading support in Python.
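For a concrete sense of the hoops involved, here is a minimal sketch (not from the original comment) of passing a decoded image between processes with Python's `multiprocessing.shared_memory`, so a worker can wrap the buffer without copying it over a socket; the frame shape and the single-script layout are illustrative only.

```python
import numpy as np
from multiprocessing import shared_memory

# Producer side: place a decoded image into a shared memory block so a worker
# process can read it without copying it through a socket.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # hypothetical decoded image
shm = shared_memory.SharedMemory(create=True, size=frame.nbytes)
view = np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)
view[:] = frame[:]

# Consumer side (normally in another process): attach by name and wrap the
# same buffer as a NumPy array, again without a copy.
existing = shared_memory.SharedMemory(name=shm.name)
tensor = np.ndarray(frame.shape, dtype=frame.dtype, buffer=existing.buf)
print(tensor.shape)

# Both sides must close, and exactly one side must unlink, or the block leaks.
existing.close()
shm.close()
shm.unlink()
```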
By the way, if anyone is struggling with these challenges today, I recommend taking a peek at NVIDIA's [Triton](https://github.com/triton-inference-server/server) inference server, which handles a lot of these details for you. It supports things like zero-copy sharing of tensors between parts of your model running in different processes/threads and does auto-batching between requests as well. The auto-batching in particular gave us a big throughput increase with only a minor latency penalty!
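To make the Triton suggestion concrete, here is a minimal client-side sketch using the `tritonclient` package; the model name and the `input__0`/`output__0` tensor names are placeholders for whatever your deployed model actually exposes. Concurrent requests like this are what the server's dynamic batcher groups together.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be running on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build one request; the tensor and model names below are placeholders.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
out = httpclient.InferRequestedOutput("output__0")

result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("output__0").shape)
```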
- Best LLM Inference Engines and Servers to Deploy LLMs in Production
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- "A matching Triton is not available"
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve llama V2:
  - llama.cpp (is it production ready or just for playing around?)
  - Triton Inference Server?
  - HF text generation inference?
- Triton Inference Server - Backend
- Single RTX 3080 or two RTX 3060s for deep learning inference?
For inference of CNNs, memory should really not be an issue. If it is, that's a software engineering problem, not a hardware one. FP16 or Int8 for weights is fine, and weight size won't increase due to the high resolution. During inference, memory used for hidden-layer tensors can be reused as soon as the last consumer layer has been processed. You are likely using something that is designed for training to do inference, and that blows up the memory requirement; or, if you are using TensorRT or something like that, you need to be careful to avoid every task loading its own copy of the library code into the GPU. Maybe look at https://github.com/triton-inference-server/server
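As a rough illustration of the point about FP16 weights and activation-memory reuse, the sketch below runs a stand-in torchvision CNN in half precision under `torch.inference_mode()`, which avoids keeping activations for autograd; it assumes a CUDA GPU, and ResNet-50 is used purely as an example.

```python
import torch
from torchvision import models

# Stand-in CNN; any convolutional model works the same way here.
model = models.resnet50(weights=None).half().cuda().eval()

with torch.inference_mode():  # no autograd graph, so activation memory can be reused
    x = torch.randn(1, 3, 1024, 1024, dtype=torch.half, device="cuda")
    out = model(x)

print(out.shape, torch.cuda.max_memory_allocated() // 2**20, "MiB")
```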
- Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkon](https://github.com/vertexclique/orkhon), which seems to be dormant, and a bunch of examples off of the [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning) list.
- Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework, you should maybe take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option as it has an MLflow runtime, but only you will be able to decide that. There also might be other inference servers that do MMS that I'm not aware of.
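For a flavour of the MLServer option mentioned above, here is a minimal sketch of a custom runtime; the class, the placeholder "model", and the output tensor name are all illustrative, and the authoritative API details are in the MLServer docs.

```python
# Hypothetical MLServer custom runtime; the "model" is just a placeholder.
from mlserver import MLModel
from mlserver.types import InferenceRequest, InferenceResponse, ResponseOutput


class ToyRuntime(MLModel):
    async def load(self) -> bool:
        self._scale = 2.0  # pretend this is an expensive model load
        return True

    async def predict(self, payload: InferenceRequest) -> InferenceResponse:
        data = payload.inputs[0].data
        result = [float(x) * self._scale for x in data]
        return InferenceResponse(
            model_name=self.name,
            outputs=[
                ResponseOutput(
                    name="output", shape=[len(result)], datatype="FP32", data=result
                )
            ],
        )
```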
serve
- Show HN: Llama2 Embeddings FastAPI Server
What's wrong with just using Torchserve[1]? We've been using it to serve embedding models in production.
[1] https://pytorch.org/serve/
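For context, querying a running TorchServe instance is just an HTTP call to its inference endpoint; the sketch below assumes a model registered under the hypothetical name `embedder`, and the payload format depends on whatever custom handler it uses.

```python
import requests

# TorchServe serves inference on port 8080 by default; "embedder" is a
# hypothetical model name, and the JSON payload shape is handler-specific.
resp = requests.post(
    "http://localhost:8080/predictions/embedder",
    json={"text": "hello world"},
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```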
- How to leverage a local LLM for a client?
Looks like you are already up to speed loading LLaMa models, which is great. Assuming this is a Hugging Face PyTorch checkpoint, I think it should be possible to spin up a TorchServe instance, which has in-built support for API access and HF Transformers. Since scale and latency aren't a big concern for you, this should be a good enough start.
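A rough sketch of what that TorchServe setup can look like: a custom handler wrapping a Hugging Face checkpoint. The class name, payload handling, and generation settings are illustrative only; TorchServe's own Transformers examples cover the details.

```python
# Hypothetical TorchServe handler for a Hugging Face causal LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler


class LlamaHandler(BaseHandler):
    def initialize(self, context):
        model_dir = context.system_properties.get("model_dir")
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        self.model = AutoModelForCausalLM.from_pretrained(model_dir)
        self.model.eval()
        self.initialized = True

    def preprocess(self, requests):
        # Each request carries its payload under "data" or "body".
        texts = []
        for req in requests:
            payload = req.get("data") or req.get("body")
            if isinstance(payload, (bytes, bytearray)):
                payload = payload.decode("utf-8")
            texts.append(str(payload))
        return self.tokenizer(texts, return_tensors="pt", padding=True)

    def inference(self, inputs):
        return self.model.generate(**inputs, max_new_tokens=64)

    def postprocess(self, outputs):
        return self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
```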
- Is there a course that teaches you how to make an API with a trained model?
- Pytorch eating memory on every api call
You could split the service in two: Flask for the web part and a separate service to serve the model. I haven't used it myself, but there is https://pytorch.org/serve/
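A minimal sketch of the suggested split, assuming the model server (TorchServe here, but anything with an HTTP endpoint works) is already running; the endpoint path and model name are placeholders.

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)
MODEL_URL = "http://localhost:8080/predictions/my_model"  # hypothetical model endpoint


@app.route("/predict", methods=["POST"])
def predict():
    # Flask owns the web concerns; the model server owns the weights and GPU
    # memory, so repeated API calls no longer grow this process's memory.
    upstream = requests.post(MODEL_URL, data=request.get_data(), timeout=10)
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/json"),
    )


if __name__ == "__main__":
    app.run(port=5000)
```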
- Google Kubernetes Engine : Unable to access ports exposed on external IP
I'm attempting to set up inference for a torchserve container, and it's really tough to figure out what's not allowing me to connect to my network with the ports that I'm trying to expose. I'm using Google Kubernetes Engine and Helm via tweaking one of the tutorials at [torchserve](https://github.com/pytorch/serve). Specifically, it's the GKE tutorial [here](https://github.com/pytorch/serve/tree/master/kubernetes).
- BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
I did a Space to showcase some of the speedups we can get in an end-to-end case with TorchServe to deploy the model on a cloud instance (AWS EC2 g4dn, using one T4 GPU): https://huggingface.co/spaces/fxmarty/bettertransformer-demo
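For reference, the conversion being demoed is a one-liner from the `optimum` package; the checkpoint below is just an example, and the transform only applies to supported architectures.

```python
from optimum.bettertransformer import BetterTransformer
from transformers import AutoModel

# Example checkpoint; any supported encoder model can be swapped in.
model = AutoModel.from_pretrained("distilbert-base-uncased")
# Replaces supported layers with PyTorch-native fused implementations (inference only).
model = BetterTransformer.transform(model)
```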
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like Bento, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least makes it optional) and has some convenience features: ease of use, easy to deploy, easy to host multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experience here as I know there are now quite a few options! Any help is appreciated! Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
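To make the "dynamic batching" requirement concrete, here is a toy, framework-free sketch of the idea: requests that arrive within a short window are grouped into a single batched model call. Production servers (Triton, TorchServe, Ray Serve) implement this far more robustly; everything below is illustrative.

```python
import asyncio


class DynamicBatcher:
    """Toy dynamic batcher: groups requests arriving within max_wait_s."""

    def __init__(self, model_fn, max_batch=8, max_wait_s=0.01):
        self.model_fn = model_fn    # takes a list of inputs, returns a list of outputs
        self.max_batch = max_batch
        self.max_wait_s = max_wait_s
        self.queue: asyncio.Queue = asyncio.Queue()

    async def infer(self, x):
        # Called by request handlers; resolves once the batched call completes.
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((x, fut))
        return await fut

    async def run(self):
        # Background loop: collect up to max_batch items or wait max_wait_s.
        loop = asyncio.get_running_loop()
        while True:
            items = [await self.queue.get()]
            deadline = loop.time() + self.max_wait_s
            while len(items) < self.max_batch:
                remaining = deadline - loop.time()
                if remaining <= 0:
                    break
                try:
                    items.append(await asyncio.wait_for(self.queue.get(), remaining))
                except asyncio.TimeoutError:
                    break
            outputs = self.model_fn([x for x, _ in items])  # one batched call
            for (_, fut), out in zip(items, outputs):
                fut.set_result(out)
```

In practice you would start `run()` as a background task and have each request handler simply `await batcher.infer(x)`.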
- how to integrate a deep learning model into a Django webapp!?
If you built the model using pytorch or tensorflow, I'd suggest using torchserve or TF serving to serve the model as its own "microservice," then query it from your django app. Among other things, it will make retraining and updating your model a lot easier.
- Choose JavaScript 🧠
- Popular Machine Learning Deployment Tools
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
serving - A flexible, high-performance serving system for machine learning models
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
JavaScriptClassifier - [Moved to: https://github.com/JonathanSum/JavaScriptClassifier]
submarine - Submarine is a Cloud Native Machine Learning Platform.
Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.
kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
Megatron-LM - Ongoing research training transformer models at scale
swiss_army_llama - A FastAPI service for semantic text search using precomputed embeddings and advanced similarity measures, with built-in support for various file types through textract.