kserve vs Juice-Labs
| | kserve | Juice-Labs |
|---|---|---|
| Mentions | 3 | 20 |
| Stars | 3,047 | 387 |
| Growth | 7.3% | 5.9% |
| Activity | 9.4 | 8.7 |
| Latest commit | 7 days ago | 4 months ago |
| Language | Python | Go |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kserve
-
Show HN: Software for Remote GPU-over-IP
Inference servers essentially turn a model running on CPU and/or GPU hardware into a microservice.
Many of them implement the kserve API standard[0], which covers everything from model loading/unloading to (of course) inference requests across models, versions, frameworks, etc.
So in the case of Triton[1] you can have any number of different TensorFlow/PyTorch/TensorRT/ONNX/etc. models, versions, and variants. You can have one or more Triton instances running on hardware with access to local GPUs (for this example). Then you can put standard REST and/or gRPC load balancers (or whatever you want) in front of them, hit them via another API, and so on.
Now all your applications need to do to perform inference is issue an HTTP POST with the model input (or use a client[2]), Triton runs it on a GPU (or CPU if you want), and you get back whatever the model output is (a minimal request sketch follows the reference links below).
Not a sales pitch for Triton, but it (like some others) can also do things like dynamic batching with QoS parameters, automated model profiling and performance optimization[3], really granular control over resources, response caching, Python middleware for application/business logic, accelerated media processing with NVIDIA DALI, and all kinds of other stuff.
[0] - https://github.com/kserve/kserve
[1] - https://github.com/triton-inference-server/server
[2] - https://github.com/triton-inference-server/client
[3] - https://github.com/triton-inference-server/model_analyzer
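To make that concrete, here is a minimal sketch of such a POST against the v2 (kserve-style) inference protocol using Python's requests library; the server address, model name, tensor name, and shape are placeholders for illustration, not anything from the post above.

```python
import requests

# Hypothetical server address and model name; the v2 inference protocol
# exposes models at /v2/models/<model_name>/infer.
URL = "http://localhost:8000/v2/models/my_model/infer"

# One input tensor, described by name/shape/datatype per the v2 protocol.
payload = {
    "inputs": [
        {
            "name": "input__0",  # placeholder tensor name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()

# The response carries an "outputs" list with the model's output tensors.
print(resp.json()["outputs"])
```

Since Triton implements this same protocol over both REST and gRPC, the identical request shape works against a Triton endpoint (or anything else behind a load balancer that speaks it).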
-
Run your first Kubeflow pipeline
Kubeflow has multiple components: the central dashboard; Kubeflow Notebooks for managing Jupyter notebooks; Kubeflow Pipelines for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers; KFServing for model serving (apparently superseded by KServe); Katib for hyperparameter tuning and model search; and training operators such as TFJob for training TensorFlow models on Kubernetes.
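As a taste of what a Kubeflow pipeline looks like in code, here is a minimal sketch using the kfp SDK's v2-style decorators; the component, pipeline, and file names are illustrative, not taken from the post above.

```python
from kfp import compiler, dsl

# A trivial component: each @dsl.component function runs in its own container.
@dsl.component
def add(a: float, b: float) -> float:
    return a + b

# A pipeline wiring two component invocations together; outputs flow between steps.
@dsl.pipeline(name="add-pipeline")
def add_pipeline(x: float = 1.0, y: float = 2.0) -> float:
    first = add(a=x, b=y)
    second = add(a=first.output, b=3.0)
    return second.output

# Compile to a YAML spec that can be uploaded to a Kubeflow Pipelines instance.
compiler.Compiler().compile(add_pipeline, "add_pipeline.yaml")
```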
-
[D] Serverless solutions for GPU inference (if there's such a thing)
If you can run on Kubernetes, then KFServing is an open-source solution that allows GPU inference and is built upon Knative to allow scale-to-zero for GPU-based inference. From release 0.5 it also has multi-model serving as an alpha feature, allowing multiple models to share the same server (and, via NVIDIA Triton, the same GPU).
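For context, a deployment like this boils down to a single InferenceService resource; below is a minimal sketch of such a manifest written as a Python dict (the name and storageUri are placeholders), with minReplicas set to 0 so Knative can scale the predictor to zero between requests.

```python
# A minimal KServe InferenceService manifest as a Python dict; it could be
# applied with the kubernetes client or dumped to YAML for kubectl. The
# metadata name and storageUri are illustrative placeholders.
inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "my-model"},
    "spec": {
        "predictor": {
            "minReplicas": 0,  # allow Knative to scale the predictor to zero
            "model": {
                "modelFormat": {"name": "sklearn"},
                "storageUri": "gs://my-bucket/models/my-model",  # placeholder
                "resources": {"limits": {"nvidia.com/gpu": "1"}},
            },
        }
    },
}

# Requires PyYAML; prints a manifest suitable for `kubectl apply -f -`.
import yaml
print(yaml.safe_dump(inference_service))
```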
Juice-Labs
- GPU-over-IP for LLM inference?
- GTA 5 running in QEMU without PCI passthrough using Juice Labs
-
This looks very cool: GPU-over-IP with Juice. You can attach GPUs to non-GPU nodes, share a GPU across multiple users and applications, and bring the GPU to your data (vs. bringing your data to the GPU), all with just software.
The website is https://www.juicelabs.co/ and they have a community version as well: https://github.com/Juice-Labs/Juice-Labs
-
eGPU alternative?
I recently discovered juicelabs.co, but I have not yet tested it. It may be worth a look.
-
Why I think 3D artists should get an eGPU for rendering, even if they have a desktop [How stuff works + Idea]
Or you could even use a remote GPU like Juice GPU.
-
Using Cloud-GPU as an eGPU?
Check out https://www.juicelabs.co/
-
Looking for a Bitfusion replacement? I think I may have found something really cool... Juice - which not only supports CUDA but all the graphical APIs
So our lab had been using Bitfusion until recently for a large number of VM deployments. With Bitfusion support coming to an end, we were talking about solutions and did some Googling around GPU-over-IP and stumbled across these guys: www.juicelabs.co
-
Is it possible to install Automatic1111 and manage it as if it were local, but using a shared GPU service such as runpod.io/endpoints?
Juice may help with passing the GPU over IP; I haven't tried it yet, though.
-
ClosedAI strikes again
Even then you can always use Juice. https://www.juicelabs.co/
-
Multiple inference, single remote GPU of Stable Diffusion
The functionality to do this today is available via our community edition here: https://github.com/Juice-Labs/Juice-Labs/wiki
What are some alternatives?
kubeflow - Machine Learning Toolkit for Kubernetes
Easy-GPU-P - A Project dedicated to making GPU Partitioning on Windows easier!
aws-virtual-gpu-device-plugin - AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads
vgpu_unlock - Unlock vGPU functionality for consumer-grade GPUs.
awesome-mlops - A curated list of references for MLOps
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
kubeflow-learn
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Python-Schema-Matching - A Python tool using XGBoost and sentence-transformers to perform schema matching tasks on tables.
ml-stable-diffusion - Stable Diffusion with Core ML on Apple Silicon