HIP vs nvidia-patch

| | HIP | nvidia-patch |
|---|---|---|
| Mentions | 30 | 309 |
| Stars | 3,462 | 2,975 |
| Growth | 1.5% | - |
| Activity | 8.9 | 8.5 |
| Last commit | 3 days ago | 6 days ago |
| Language | C++ | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
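The weighting scheme described above can be sketched as follows. This is a toy illustration only: the site does not publish its actual formula, so the exponential half-life decay and the percentile-based 0-10 scaling here are assumptions.

```python
from datetime import date

def activity_score(commit_dates, today, half_life_days=90):
    """Recency-weighted commit count: a commit's weight halves every
    `half_life_days` days (assumed decay model, not the site's real one)."""
    return sum(0.5 ** ((today - d).days / half_life_days) for d in commit_dates)

def relative_activity(project_score, all_scores):
    """Map a raw score to a 0-10 scale by percentile rank among all
    tracked projects, so that 9.0 means top 10%."""
    below = sum(1 for s in all_scores if s < project_score)
    return 10 * below / len(all_scores)
```

Under this model, five commits from last week outscore five commits from a year ago, which matches the description that recent commits carry more weight.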
HIP
-
Porting HPC Applications to AMD Instinct MI300A Using Unified Memory and OpenMP
>ROCm or HIP?
I'm not sure that's even the right question to ask. AFAIK ROCm is the name of the entire tech stack, and HIP is AMD's equivalent to CUDA C++: they essentially replicated the API and replaced every "cuda" with "hip" (so there are functions called `hipMalloc` and `hipMemcpy`).
The repository is located at https://github.com/ROCm/HIP.
- HIP: Runtime API and Kernel Language for Portable Apps for AMD and Nvidia GPUs
-
Open-source project ZLUDA lets CUDA apps run on AMD GPUs
Is it perhaps because they want people to use HIP?
> HIP is very thin and has little or no performance impact over coding directly in CUDA mode.
> The HIPIFY tools automatically convert source from CUDA to HIP.
1. https://github.com/ROCm/HIP
-
AMD's Next GPU Is a 3D-Integrated Superchip
AMD has released HIP and a tool called HIPIFY, which behaves somewhat like this but at the source level¹. Rather than trying to just translate CUDA to work on AMD compute, they are more focused on higher-level tooling.
Currently they seem to have a particular focus on AI frameworks and tools like PyTorch/TensorFlow/ONNX. They have sponsored and helped with a lot of PyTorch development, for example, so PyTorch support for AMD is much better than it was this time last year².
¹(https://github.com/ROCm/HIP)
²(https://pytorch.org/blog/experience-power-pytorch-2.0/)
-
Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
> what would be the point for someone to add ROCm support to various pieces of software which currently require CUDA
It isn't just old cards, though. CUDA is a point of centralization on a single provider, at a time when access to that provider's higher-end cards isn't even available, and that is causing people to look elsewhere.
ROCm supports CUDA through the included HIP projects...
https://github.com/ROCm/HIP
https://github.com/ROCm/HIPCC
https://github.com/ROCm/HIPIFY
The latter will regex-replace your CUDA methods with HIP methods. If it is as easy as running hipify on your codebase (or just coding to the HIP APIs), it certainly makes sense to do so.
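In the spirit of the regex approach described above, here is a minimal sketch of that kind of source-level rename. The mapping table is a small hand-picked subset of real CUDA-to-HIP equivalents; the actual HIPIFY tools (hipify-perl, hipify-clang) cover the full runtime and library APIs and handle many cases a bare regex cannot.

```python
import re

# A few real CUDA -> HIP API equivalents (illustrative subset only).
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Regex-replace CUDA identifiers with their HIP counterparts."""
    # Match longest names first so e.g. cudaMemcpyHostToDevice is not
    # partially rewritten by the shorter cudaMemcpy rule.
    pattern = re.compile("|".join(
        re.escape(k) for k in sorted(CUDA_TO_HIP, key=len, reverse=True)))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)
```

For example, `hipify("cudaMalloc(&p, n);")` yields `"hipMalloc(&p, n);"`, which is exactly the one-to-one rename the comment above describes.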
-
Nvidia on the Mountaintop
AMD's equivalent is HIP [1], for sufficiently flexible definitions of "equivalent". I can't speak to how complete/correct/performant it is (I'm just a guy running tutorial/toy-level ML stuff on an RDNA1 card), but part of AMD's problem is that it might not practically matter how well they do this because the broader ecosystem support specifically for the CUDA stack is so entrenched.
[1] https://github.com/ROCm-Developer-Tools/HIP
- Stable Diffusion in pure C/C++
- Would love to hear your information and knowledge to simplify my understanding on AMD's positioning in the AI market
-
Ask HN: C++ still dominates on GPUs, why not Rust?
From what I know, modern GPUs are still programmed with C++ exclusively. See CUDA [0] for Nvidia and ROCm [1] for AMD.
Why is this? Why Rust is not loved there?
[0] https://docs.nvidia.com/cuda/
[1] https://github.com/ROCm-Developer-Tools/HIP
-
[P] RWKV C++ Cuda library with no dependencies, no torch, and no python
Go ahead and try to ship ROCm code that works on multiple consumer graphics cards on Linux, macOS, and Windows. As an example of how much AMD cares about it, the installation notes linked in the readme return a 404.
nvidia-patch
-
Do I need to have a beefy PC to transcode 4k? Or can I just buy my brother an Nvidia shield pro and setup a cheap server on my end?
This can be patched out. https://github.com/keylase/nvidia-patch
-
Transcoding 4K HDR tone mapping
NVIDIA Corporation GA106 [GeForce RTX 3060] and I applied the patch here https://github.com/keylase/nvidia-patch
-
Linux 6.6 to Protect Against Illicit Behavior of Nvidia Proprietary Driver
> CUDA, and pretty much all optimization(hacks) done to run games better
And arbitrary limitations implemented at the driver level to force you to purchase their enterprise GPUs, see https://github.com/keylase/nvidia-patch#nvenc-and-nvfbc-patc...
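The linked patch works by editing the driver's encode library in place, swapping the bytes at the session-limit check. The sketch below shows only the general search-and-replace mechanics on a synthetic buffer; the byte patterns, the offsets, and the stand-in library bytes are invented for illustration and are not the real opcodes the project targets.

```python
def patch_bytes(blob: bytes, search: bytes, replace: bytes) -> bytes:
    """Replace one occurrence of `search` with an equal-length `replace`,
    as a binary patcher would inside a driver shared library."""
    if len(search) != len(replace):
        raise ValueError("patch must preserve length")
    idx = blob.find(search)
    if idx == -1:
        raise ValueError("pattern not found (wrong driver version?)")
    return blob[:idx] + replace + blob[idx + len(search):]

# Hypothetical example: NOP out a two-byte conditional jump.
library = b"\x90\x90\x74\x05\x90\x90"   # synthetic stand-in, not real driver bytes
patched = patch_bytes(library, b"\x74\x05", b"\x90\x90")
```

Because the replacement is equal-length, the library's layout and every other offset in the file stay intact, which is why this style of patch has to be re-matched against each new driver release.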
-
GPU Guide (For AI Use-Cases)
Nvidia has no motivation to make a consumer card with lots of VRAM, that's basically the only (relevant) separator between the GeForce family and the Quadro lineup.
There are restrictions on NVENC streams with consumer cards, but that has been a solved problem for a while [0].
If they were to make a consumer card with more VRAM, it would immediately undercut their own Quadro/Tesla lineup, which cost substantially more. I don't see a reason for them to do it.
0: https://github.com/keylase/nvidia-patch
-
Can't hardware transcode more than 5 at a time even after all the required changes
I have never had to do the session limit bump from the last link. I have a 3090 as well and simply did the initial unlock, which worked fine. I would reinstall fresh drivers from Nvidia, making sure you install the newest one that is supported by the unlock tool (536.40 as of this post; the GitHub for the patch has links to the drivers: https://github.com/keylase/nvidia-patch/tree/master/win)
- Can you flash any consumer version Nvidia card to remove the streaming limits?
-
Can my GPU transcode?
Aren't these Quadro versions? The patch here (https://github.com/keylase/nvidia-patch) supports Quadro versions if you click on the win link.
-
Let's have a talk - Guide to Choosing the Best Plex Server for You
Second, the GPU. The GPU is probably as important as the CPU, and in some cases more important. When we talk about GPUs we will primarily talk about Nvidia GPUs, as they are officially supported by the Plex team. Nvidia GPUs matter for Plex hardware transcoding because of their dedicated video encoding/decoding units, strong performance, wide codec support, good video quality, reduced CPU load, and power efficiency; they offer a powerful hardware-acceleration solution that can greatly enhance the transcoding capabilities of a Plex server. It's also important to note that Nvidia GPUs require a patch to unlock the number of HW transcoding streams. Dedicated GPUs are large pieces of hardware and have their place in desktop PCs, but they can also be used with mini-PCs via an external GPU enclosure.
-
What does this Max. 3 concurrent stream cap mean anway?
As there's no NVENC patch available (yet) for the beta driver branch (referring to this one: https://github.com/keylase/nvidia-patch), which can lift the limits on HW transcoding, I was wondering: I can see 5 (HW) streams on Plex, which shouldn't be possible, no?
-
Is there somewhere that lists Nvidia GPUs?
I haven't done this yet, but there is a patch on GitHub that removes the limitation for consumer GPUs. It makes lower-end cards more attractive for this type of work.
What are some alternatives?
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
vgpu_unlock - Unlock vGPU functionality for consumer grade GPUs.
ZLUDA - CUDA on AMD GPUs
nvlax - Future-proof NvENC & NvFBC patcher (Linux/Windows)
futhark - :boom::computer::boom: A data-parallel functional programming language
Sunshine - Self-hosted game stream host for Moonlight.
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
wlroots - A modular Wayland compositor library
ginkgo - Numerical linear algebra software package
unmanic - Unmanic - Library Optimiser
rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform
Proxmox-Nvidia-LXC- - how to create a Proxmox LXC in 6.2-1