server vs ROCm
| | server | ROCm |
|---|---|---|
| Mentions | 24 | 198 |
| Stars | 7,314 | 3,637 |
| Stars growth | 5.4% | - |
| Activity | 9.5 | 0.0 |
| Latest commit | 2 days ago | 4 months ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
server
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- "A matching Triton is not available"
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve llama V2: llama.cpp (is it production ready or just for playing around?), Triton Inference Server, or HF text generation inference?
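For context on what the "HF text generation inference" option looks like from the client side, here is a minimal sketch. It assumes a text-generation-inference container is already running locally on port 8080 with a Llama 2 model loaded; the host, port, and generation parameters are illustrative.

```python
import requests

# Assumes text-generation-inference is serving a Llama 2 model at this address.
TGI_URL = "http://localhost:8080/generate"

payload = {
    "inputs": "Explain the difference between llama.cpp and Triton in one sentence.",
    "parameters": {"max_new_tokens": 64, "temperature": 0.7},
}

resp = requests.post(TGI_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["generated_text"])
```

llama.cpp's built-in server and Triton expose different HTTP APIs, but the client-side shape is similar: a small JSON request with a prompt and sampling parameters.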
- Triton Inference Server - Backend
- Single RTX 3080 or two RTX 3060s for deep learning inference?
For inference of CNNs, memory should really not be an issue. If it is, that's a software engineering problem, not a hardware issue. FP16 or Int8 for weights is fine, and weight size won't increase due to the high resolution. During inference, memory used for hidden-layer tensors can be reused as soon as the last consumer layer has been processed. You are likely using something that is designed for training to do inference, and that blows up the memory requirement; or, if you are using TensorRT or something like that, you need to be careful to avoid every task loading its own copy of the library code onto the GPU. Maybe look at https://github.com/triton-inference-server/server
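As a rough illustration of the "inference needs much less memory than training" point, here is a minimal PyTorch sketch (model choice and input size are arbitrary): FP16 weights plus inference mode, so no gradient or optimizer state is kept and activation memory can be reused layer by layer.

```python
import torch
import torchvision.models as models

# Memory-lean CNN inference sketch: half-precision weights, no autograd state.
model = models.resnet50(weights=None).half().eval().cuda()

with torch.inference_mode():  # no gradient buffers are allocated
    x = torch.randn(1, 3, 1024, 1024, dtype=torch.half, device="cuda")
    out = model(x)

print(out.shape, torch.cuda.max_memory_allocated() / 2**20, "MiB peak")
```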
- Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkhon](https://github.com/vertexclique/orkhon), which seems to be dormant, and a bunch of examples off of the [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning) list.
- Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework you should maybe take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option as it has an MLflow runtime, but only you will be able to decide that. There also might be other inference servers that do MMS that I'm not aware of.
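To make the multi-model serving aspect concrete, here is a small sketch using Triton's Python HTTP client. It assumes a Triton server is running locally and was started with `--model-control-mode=explicit` so that models can be loaded and unloaded on demand; the model name is hypothetical.

```python
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# List everything the server's model repository knows about.
for model in client.get_model_repository_index():
    print(model["name"], model.get("state"))

# Swap models in and out without restarting the server.
client.load_model("resnet50_onnx")             # hypothetical model name
print(client.is_model_ready("resnet50_onnx"))
client.unload_model("resnet50_onnx")
```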
- I mean,.. we COULD just make our own lol
[1] https://docs.nvidia.com/launchpad/ai/chatbot/latest/chatbot-triton-overview.html
[2] https://github.com/triton-inference-server/server
[3] https://neptune.ai/blog/deploying-ml-models-on-gpu-with-kyle-morris
[4] https://thechief.io/c/editorial/comparison-cloud-gpu-providers/
[5] https://geekflare.com/best-cloud-gpu-platforms/
- Why TensorFlow for Python is dying a slow death
"TensorFlow has the better deployment infrastructure"
Tensorflow Serving is nice in that it's so tightly integrated with Tensorflow. As usual, that goes both ways: it's so tightly coupled to Tensorflow that if the MLOps side of the solution is using Tensorflow Serving, you're going to get "trapped" in the Tensorflow ecosystem (essentially).
For pytorch models (and just about anything else) I've been really enjoying Nvidia Triton Server[0]. Of course it further entrenches Nvidia and CUDA in the space (although you can execute models CPU only) but for a deployment today and the foreseeable future you're almost certainly going to be using a CUDA stack anyway.
Triton Server is very impressive and I'm always surprised to see how relatively niche it is.
[0] - https://github.com/triton-inference-server/server
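For readers wondering how "PyTorch models on Triton" works in practice, a minimal sketch follows. Triton's PyTorch (libtorch) backend serves a TorchScript file named `model.pt` from a model repository laid out as `<repo>/<model_name>/<version>/model.pt` plus a `config.pbtxt`; the paths and model choice below are illustrative.

```python
import os
import torch
import torchvision.models as models

# Export a PyTorch model to TorchScript in the layout Triton expects.
model = models.resnet18(weights=None).eval()
example = torch.randn(1, 3, 224, 224)

traced = torch.jit.trace(model, example)          # TorchScript via tracing
os.makedirs("model_repository/resnet18/1", exist_ok=True)
traced.save("model_repository/resnet18/1/model.pt")
```

The accompanying `config.pbtxt` names the backend and the input/output tensors, and the server is then pointed at the repository with `--model-repository`.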
ROCm
- AMD May Get Across the CUDA Moat
Yep, did exactly that. IMO he threw a fit, even though AMD was working with him squashing bugs. https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
- ROCm 5.7.0 Release
- ROCm Is AMD's #1 Priority, Executive Says
Ok, I wonder what's wrong. Maybe it's this? https://stackoverflow.com/questions/4959621/error-1001-in-cl...
Nope. Anything about this on the Arch wiki? Nope.
This bug report[2] from 2021? Maybe I need to update my groups.
[2]: https://github.com/RadeonOpenCompute/ROCm/issues/1411
$ ls -la /dev/kfd
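As a quick sanity check for the kind of failure described above, here is a hedged Python sketch, assuming a ROCm build of PyTorch is installed. `/dev/kfd` must exist and be readable/writable by the current user (typically via the video/render groups) for ROCm to work at all.

```python
import os
import torch

print("kfd accessible:", os.access("/dev/kfd", os.R_OK | os.W_OK))
print("ROCm/HIP build:", torch.version.hip)      # None on CUDA-only builds
print("GPU visible:   ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:        ", torch.cuda.get_device_name(0))
```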
- Simplifying GPU Application Development with HMM
HMM is, I believe, a Linux feature.
AMD added HMM support in ROCm 5.0 according to this: https://github.com/RadeonOpenCompute/ROCm/blob/develop/CHANG...
- AMD Ryzen APU turned into a 16GB VRAM GPU and it can run Stable Diffusion
Woot, AMD now supports APUs? I sold my notebook as I hit a wall when trying ROCm [1]. Is there a list of working APUs?
[1] https://github.com/RadeonOpenCompute/ROCm/issues/1587
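A workaround often reported by the community for consumer GPUs and APUs that ROCm does not officially support is to spoof a supported gfx target via an environment variable; this is not an officially documented interface, and the value below (gfx1030) is purely illustrative and must match a target close to the actual hardware.

```python
import os

# Community workaround (unofficial): pretend to be a supported gfx target.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

import torch  # import after setting the env var so the HIP runtime sees it

print(torch.cuda.is_available())
```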
- Nvidia's CUDA Monopoly
Last I heard he's abandoned working with AMD products.
https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
- Nvidia H100 GPUs: Supply and Demand
They're talking about the meltdown he had on stream [1] (in front of the mentioned pirate flag), that ended with him saying he'd stop using AMD hardware [2]. He recanted this two weeks after talking with AMD [3].
Maybe he'll succeed, but this definitely doesn't scream stability to me. I'd be wary of investing money into his ventures (but then I'm not a VC, so what do I know).
[1] https://www.youtube.com/watch?v=Mr0rWJhv9jU
[2] https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
[3] https://twitter.com/realGeorgeHotz/status/166980346408248934...
- Open or closed source Nvidia driver?
As for ROCm support on consumer devices, AMD won't even clarify which devices are supported. https://github.com/RadeonOpenCompute/ROCm/pull/1738
- Why Nvidia Keeps Winning: The Rise of an AI Giant
He flamed out, then is back after Lisa Su called him (lmao)
https://geohot.github.io/blog/jekyll/update/2023/05/24/the-t...
https://www.youtube.com/watch?v=Mr0rWJhv9jU
https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...
On a personal level, that YouTube video doesn't make him come off looking that good... people are trying to get patches to him and generally soothe him / do damage control, and he's just being a bit of a manchild. And it sounds like that's the general course of events around a lot of his "efforts".
On the other hand he's not wrong either, having this private build inside AMD and not even validating official, supported configurations for the officially supported non-private builds they show to the world isn't a good look, and that's just the very start of the problems around ROCm. AMD's OpenCL runtime was never stable or good either and every experience I've heard with it was "we spent so much time fighting AMD-specific runtime bugs and specs jank that what we ended up with was essentially vendor-proprietary anyway".
On the other other hand, it sounds like AMD know this is a mess and has some big stability/maturity improvements in the pipeline. It seems clear from some of the smoke coming out of the building that they're cooking on more general ROCm support for RDNA cards, and generally working to patch the maturity and stability issues he's talking about. I hate the "wait for drivers/new software release bro it's gonna fix everything" that surrounds AMD products but in this case I'm at least hopeful they seem to understand the problem, even if it's completely absurdly late.
Some of what he was viewing as "the process happening in secret" was likely people doing rush patches on the latest build to accommodate him, and he comes off as berating them over it. Again, like, that stream just comes off as "mercurial manchild" not coding genius. And everyone knew the driver situation is bad, that's why there's notionally alpha for him to realize here in the first place. He's bumping into moneymakers, and getting mad about it.
- Disable "SetTensor/CopyTensor" console logging.
I tried to train another model using InceptionResNetV2 and the same issue happens. Also, this happens even when using the model.predict() method on the GPU. Probably this is an issue related to the AMD Radeon RX 6700 XT or some misconfiguration of mine. System information: Arch Linux 6.1.32-1-lts - AMD Radeon RX 6700 XT - gfx1031. Opened issues: https://github.com/RadeonOpenCompute/ROCm/issues/2250 and https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/2125
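For the logging part of the question, a hedged sketch of the standard TensorFlow knobs follows. `TF_CPP_MIN_LOG_LEVEL` must be set before the first TensorFlow import; whether the specific SetTensor/CopyTensor lines honor it depends on which component emits them.

```python
import os

# 0 = all messages, 1 = hide INFO, 2 = hide WARNING, 3 = hide ERROR.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

import tensorflow as tf

tf.get_logger().setLevel("ERROR")                 # Python-side logger
print(tf.config.list_physical_devices("GPU"))     # confirm the GPU is visible
```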
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
tensorflow-directml - Fork of TensorFlow accelerated by DirectML
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform
Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.
oneAPI.jl - Julia support for the oneAPI programming toolkit.
Megatron-LM - Ongoing research training transformer models at scale
SHARK - SHARK - High Performance Machine Learning Distribution
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
llama.cpp - LLM inference in C/C++