Juice-Labs vs client

| | Juice-Labs | client |
|---|---|---|
| Mentions | 20 | 2 |
| Stars | 387 | 483 |
| Growth | 2.3% | 3.1% |
| Activity | 8.7 | 9.4 |
| Latest commit | 4 months ago | about 10 hours ago |
| Language | Go | C++ |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Juice-Labs
- GPU-over-IP for LLM inference?
- GTA 5 running in Qemu without PCI Passthrough using Juicy Labs
-
This looks very cool: GPU-over-IP with Juice. You can attach a GPU to non-GPU nodes, share a GPU across multiple users and applications, and bring the GPU to your data (vs. bringing your data to the GPU) - all with just software.
The website is https://www.juicelabs.co/ and they have a community version as well: https://github.com/Juice-Labs/Juice-Labs
-
EGPU ALTERNATIVE?
I recently discovered juicelabs.co but I have not yet tested it. Maybe worth a look.
-
Why I think 3D artists should get an eGPU for rendering, even if they have a desktop [How stuff works + Idea]
Or you could even use a remote GPU like Juice GPU
-
Using Cloud-GPU as an eGPU?
check out https://www.juicelabs.co/
-
Looking for a Bitfusion replacement? I think I may have found something really cool... Juice - which not only supports CUDA but all the graphical APIs
So our lab had been using Bitfusion until recently for a large number of VM deployments. With Bitfusion support coming to an end, we were talking about solutions and did some Googling around GPU-over-IP and stumbled across these guys: www.juicelabs.co
-
Is it possible to install Automatic1111 and manage it like a local install, but using a shared GPU service such as runpod.io/endpoints?
Juice may help with passing a GPU over IP; I haven't tried it yet though.
-
ClosedAI strikes again
Even then you can always use Juice. https://www.juicelabs.co/
-
Multiple inference, single remote GPU of Stable Diffusion
The functionality to do this today is available via our community edition here: https://github.com/Juice-Labs/Juice-Labs/wiki
client
-
Ollama releases OpenAI API compatibility
- While keeping power utilization below X
They will take the exported model and dynamically deploy the package to a Triton instance running on your actual inference serving hardware, then generate requests to meet your SLAs to come up with the optimal model configuration. You even get exported metrics and pretty reports for every configuration used/attempted. You can take the same exported package, change the SLA params, and it will automatically re-generate the configuration for you.
- Performance on a completely different level. TensorRT-LLM especially is extremely new and very early but already at high scale you can start to see > 10k RPS on a single node.
- gRPC support. Especially when using pre/post processing, ensemble, etc. you can configure clients programmatically to use the individual models or the ensemble chain (as one example). This opens up a very wide range of powerful architecture options that simply aren't available anywhere else. gRPC could probably be thought of as AsyncLLMEngine: it can abstract actual input/output or expose raw in/out so models, tokenizers, decoders, etc. can send/receive raw data/numpy/tensors. (A minimal client-side sketch follows the links below.)
- DALI support[5]. Combined with everything above, you can add DALI in the processing chain to do things like take input image/audio/etc, copy to GPU once, GPU accelerate scaling/conversion/resampling/whatever, and get output.
vLLM and HF TGI are very cool and I use them in certain cases. The fact you can give them a HF model and they just fire up with a single command and offer good performance is very impressive but there are an untold number of reasons these providers use Triton. It's in a class of its own.
[0] - https://mistral.ai/news/la-plateforme/
[1] - https://www.cloudflare.com/press-releases/2023/cloudflare-po...
[2] - https://www.nvidia.com/en-us/case-studies/amazon-accelerates...
[3] - https://github.com/triton-inference-server/model_navigator
[4] - https://github.com/triton-inference-server/client/blob/main/...
[5] - https://github.com/triton-inference-server/dali_backend
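To make the gRPC point above concrete, here is a minimal sketch using the open-source tritonclient Python package against a Triton server on its default gRPC port; the model name "my_model" and the "INPUT0"/"OUTPUT0" tensor names are placeholders, not anything taken from the comment.

```python
# Minimal Triton gRPC client sketch (pip install "tritonclient[grpc]" numpy).
# Model name and tensor names are hypothetical placeholders.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")  # Triton's default gRPC port

# Describe the input tensor and attach raw numpy data.
inp = grpcclient.InferInput("INPUT0", [1, 4], "FP32")
inp.set_data_from_numpy(np.arange(4, dtype=np.float32).reshape(1, 4))

# Ask for a specific output tensor back.
out = grpcclient.InferRequestedOutput("OUTPUT0")

result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0"))
```

The same client object can point at an individual model or at an ensemble model that chains pre/post-processing steps, which is the flexibility the comment above is describing.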
-

Show HN: Software for Remote GPU-over-IP
Inference servers essentially turn a model running on CPU and/or GPU hardware into a microservice.
Many of them support the KServe API standard[0], which covers everything from model loading/unloading to (of course) inference requests across models, versions, frameworks, etc.
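As a hedged illustration of the loading/unloading side, the sketch below drives Triton's model-repository extension of the KServe v2 REST API; it assumes a Triton server on localhost:8000 started with --model-control-mode=explicit, and "my_model" is a hypothetical model name.

```python
# Sketch: explicit model load/unload via Triton's repository extension
# to the KServe v2 REST API. Assumes tritonserver runs with
# --model-control-mode=explicit; "my_model" is a placeholder.
import requests

BASE = "http://localhost:8000"  # Triton's default HTTP port

# Load the model into memory from the model repository.
requests.post(f"{BASE}/v2/repository/models/my_model/load").raise_for_status()

# Readiness check: HTTP 200 means the model is loaded and ready to serve.
ready = requests.get(f"{BASE}/v2/models/my_model/ready")
print("ready:", ready.status_code == 200)

# Unload it again when done.
requests.post(f"{BASE}/v2/repository/models/my_model/unload").raise_for_status()
```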
So in the case of Triton[1] you can have any number of different TensorFlow/torch/tensorrt/onnx/etc models, versions, and variants. You can have one or more Triton instances running on hardware with access to local GPUs (for this example). Then you can put standard REST and/or gRPC load balancers (or whatever you want) in front of them, hit them via another API, whatever.
Now all your applications need to do to perform inference is an HTTP POST with the model input (or use a client[2]); Triton runs it on a GPU (or CPU if you want), and you get back whatever the model output is.
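As a rough sketch of that request path (again assuming a Triton server on localhost:8000 and a hypothetical model "my_model" that takes a single FP32 tensor), one inference call over the KServe v2 REST API looks like this:

```python
# Sketch: a single inference request over the KServe v2 REST API.
# Server address, model name, tensor name, shape, and data are assumptions.
import requests

payload = {
    "inputs": [
        {
            "name": "INPUT0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],  # row-major tensor contents
        }
    ]
}

resp = requests.post("http://localhost:8000/v2/models/my_model/infer", json=payload)
resp.raise_for_status()

# The response returns the output tensors in the same JSON layout.
for output in resp.json()["outputs"]:
    print(output["name"], output["shape"], output["data"])
```

Put a plain HTTP load balancer in front of several such Triton instances and the application side stays exactly this simple.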
Not a sales pitch for Triton but it (like some others) can also do things like dynamic batching with QoS parameters, automated model profiling and performance optimization[3], really granular control over resources, response caching, python middleware for application/biz logic, accelerated media processing with Nvidia DALI, all kinds of stuff.
[0] - https://github.com/kserve/kserve
[1] - https://github.com/triton-inference-server/server
[2] - https://github.com/triton-inference-server/client
[3] - https://github.com/triton-inference-server/model_analyzer
What are some alternatives?
Easy-GPU-P - A Project dedicated to making GPU Partitioning on Windows easier!
YetAnotherChatUI - Yet another ChatGPT UI. Bring your own API key.
kserve - Standardized Serverless ML Inference Platform on Kubernetes
vgpu_unlock - Unlock vGPU functionality for consumer grade GPUs.
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
lookma - LookMa connects Android devices to locally-run LLMs
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
dali_backend - The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
llamafile - Distribute and run LLMs with a single file.