| | LibreCuda | ZLUDA |
|---|---|---|
| Mentions | 2 | 41 |
| Stars | 1,028 | 10,976 |
| Growth | 1.3% | 4.4% |
| Activity | 8.5 | 7.0 |
| Last commit | 4 months ago | 7 days ago |
| Language | C | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LibreCuda
- Show HN: Attaching to a Virtual GPU over TCP
Sorry, I didn't mean nvapi, I meant rmapi.
I bet you saw this: https://github.com/mikex86/LibreCuda
They implemented the CUDA driver by calling into rmapi.
My understanding is that if there were a remote rmapi, other user-mode drivers should work out of the box?
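A rough way to picture that "remote rmapi" idea: the user-mode driver keeps issuing the same ioctls it would normally send to the local device node, but a thin client ships them to a proxy over TCP instead. The sketch below is hypothetical (the wire format and the `remote_ioctl_call` helper are made up for illustration), not code from the project above:

```c
/* Hypothetical sketch: shipping a driver ioctl over TCP so a user-mode
 * driver built on rmapi could talk to a GPU in another machine.
 * The wire format (cmd + flat payload) is made up; real rmapi calls
 * embed pointers that would need per-ioctl marshalling. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

struct remote_ioctl {
    uint32_t cmd;          /* ioctl request number */
    uint32_t payload_len;  /* number of payload bytes that follow */
    uint8_t  payload[4096];
};

/* Send the request to the remote proxy (which replays it against the real
 * device node) and read the driver-modified payload back into `arg`. */
static int remote_ioctl_call(int sock, uint32_t cmd, void *arg, uint32_t len)
{
    struct remote_ioctl req = { .cmd = cmd, .payload_len = len };
    if (len > sizeof(req.payload))
        return -1;
    memcpy(req.payload, arg, len);
    if (write(sock, &req, 2 * sizeof(uint32_t) + len) < 0)
        return -1;
    return read(sock, arg, len) == (ssize_t)len ? 0 : -1;
}
```

The real obstacle to "out of the box" is that many of these ioctls carry nested pointers, which a proxy would have to marshal call by call.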
- LibreCUDA – launch CUDA code on Nvidia GPUs without the proprietary CUDA runtime
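For context on what "without the proprietary CUDA runtime" has to cover: launching a kernel goes through the CUDA driver API, so a reimplementation must provide that whole path. A minimal sketch of the sequence follows, written against the stock driver-API names for clarity; LibreCuda exposes its own equivalents, and the cubin image and "my_kernel" name are placeholders:

```c
/* Sketch of the driver-API sequence such a reimplementation has to cover.
 * Error checking is omitted; "my_kernel" and the cubin image are placeholders. */
#include <cuda.h>
#include <stddef.h>

int launch_one_kernel(const void *cubin_image)
{
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;

    cuInit(0);                            /* initialize driver state */
    cuDeviceGet(&dev, 0);                 /* first GPU */
    cuCtxCreate(&ctx, 0, dev);            /* GPU address space + command channel */
    cuModuleLoadData(&mod, cubin_image);  /* parse the cubin ELF, upload the code */
    cuModuleGetFunction(&fn, mod, "my_kernel");

    /* grid 1x1x1, block 64x1x1, no dynamic shared memory, default stream, no params */
    cuLaunchKernel(fn, 1, 1, 1, 64, 1, 1, 0, NULL, NULL, NULL);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```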
ZLUDA
- Making AMD GPUs competitive for LLM inference
- LibreCUDA – launch CUDA code on Nvidia GPUs without the proprietary CUDA runtime
I wanted it to be a surprise, but one of those features was support for NVIDIA GameWorks. I got it working in Batman: Arkham Knight, but I never finished it, and now that code will never see the light of day.
So if I understand it correctly, there is something in the works:
https://github.com/vosen/ZLUDA
- Open-Source AMD GPU Implementation of CUDA "Zluda" Has Been Taken Down
... or mirror a recently updated fork. :)
https://github.com/vosen/ZLUDA/forks?include=active&page=1&p...
It seems the last commit to master was 9e56862.
- Chipmaker Intel to cut 15,000 jobs as it tries to revive its business
TL;DR: How much of this is a potential class action[1] and how much is a failure to deliver on AI?
Am I missing something? On one hand, I think I get it: Intel hasn't historically been a GPU company. On the other, this quote seems suspicious given that Intel's 13th- and 14th-gen cores have issues:
> Simply put, we must align our cost structure with our new operating model and fundamentally change the way we operate
At the same time, my understanding is that AMD is ahead[2] of Intel in AI / CUDA support. This quote seems to be a nod to that without saying much else:
> Our revenues have not grown as expected — and we’ve yet to fully benefit from powerful trends, like AI. Our costs are too high, our margins are too low.
Before anyone points out that "Intel® Extension for PyTorch*" exists[3]:
1. That seems to be the official name (what?)
2. Their installation homepage seems a little convoluted[4]
[1]: https://www.pcmag.com/news/intel-faces-potential-class-actio...
[2]: https://github.com/vosen/ZLUDA
[3]: https://intel.github.io/intel-extension-for-pytorch/xpu/late...
[4]: https://intel.github.io/intel-extension-for-pytorch/xpu/late...
- Open-source project ZLUDA lets CUDA apps run on AMD GPUs
It has supported AMD GPUs for three weeks now; check the latest commit in the repo:
https://github.com/vosen/ZLUDA
The article also mentions exactly this fact.
- Nvidia bans using translation layers for CUDA software
Looks like Nvidia is trying to keep the linchpin of their entire business model from crumbling underneath them. ZLUDA lets you run unmodified CUDA applications with near-native performance on AMD GPUs.
https://github.com/vosen/ZLUDA
With Triton looking to eclipse CUDA entirely, I'm not sure this prohibition does anything more than placate casual shareholders.
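To make the "translation layer" idea concrete: the approach is a drop-in replacement for libcuda that re-exports the CUDA driver API and forwards each call to AMD's HIP runtime. Below is an illustrative C sketch of that pattern, not ZLUDA's actual (Rust) code; types and error codes are simplified and the _v2-suffixed ABI exports of the real library are ignored:

```c
/* Illustrative shim: forward CUDA driver-API entry points to HIP. */
#include <hip/hip_runtime_api.h>
#include <stddef.h>
#include <stdint.h>

typedef int CUresult;                 /* 0 stands in for CUDA_SUCCESS here */
typedef unsigned long long CUdeviceptr;

CUresult cuInit(unsigned int flags)
{
    return hipInit(flags) == hipSuccess ? 0 : 1;
}

CUresult cuMemAlloc(CUdeviceptr *dptr, size_t bytesize)
{
    void *p = NULL;
    if (hipMalloc(&p, bytesize) != hipSuccess)
        return 2;                     /* out of memory, simplified */
    *dptr = (CUdeviceptr)(uintptr_t)p;
    return 0;
}

CUresult cuMemcpyHtoD(CUdeviceptr dst, const void *src, size_t n)
{
    hipError_t e = hipMemcpy((void *)(uintptr_t)dst, src, n, hipMemcpyHostToDevice);
    return e == hipSuccess ? 0 : 1;
}
```

The part this leaves out entirely, and where the real work is, is compiling the application's PTX kernels for AMD hardware.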
- Nvidia bans using translation layers for CUDA software to run on other chips
>Dark API functions are reverse-engineered and implemented by ZLUDA on a case-by-case basis once we observe an application making use of it.
https://github.com/vosen/ZLUDA/blob/master/ARCHITECTURE.md
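Roughly what that case-by-case approach looks like: applications request undocumented function tables through cuGetExportTable, keyed by a 16-byte UUID, so a reimplementation can only return tables it has already reverse-engineered and has to log the rest. The following is an illustrative C sketch, not ZLUDA's real code; the UUID, table layout, and error codes are placeholders:

```c
/* Illustrative "dark api" handling: serve known export tables, log unknown ones. */
#include <stdio.h>
#include <string.h>
#include <stddef.h>

typedef struct { char bytes[16]; } CUuuid;

/* One reverse-engineered table; by convention the first field is its size. */
struct dark_table {
    size_t size_in_bytes;
    void (*entry0)(void);   /* reverse-engineered function slots follow */
};
static const struct dark_table known_table = { sizeof(struct dark_table), NULL };
static const CUuuid known_uuid = { { 0 /* 16 observed bytes would go here */ } };

int cuGetExportTable(const void **table, const CUuuid *id)
{
    if (memcmp(id, &known_uuid, sizeof *id) == 0) {
        *table = &known_table;
        return 0;            /* success */
    }
    /* Unknown table: log the UUID so it can be implemented case by case. */
    fprintf(stderr, "dark api: unimplemented export table ");
    for (int i = 0; i < 16; i++)
        fprintf(stderr, "%02x", (unsigned char)id->bytes[i]);
    fprintf(stderr, "\n");
    return 1;                /* nonzero = error in this sketch */
}
```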
- Nvidia hits $2T valuation as AI frenzy grips Wall Street
> I know AMD have their competition, but their GPU software division keeps tripping over itself.
They are actively stepping on every rake there is. E.g., they just stopped supporting the drop-in CUDA project everyone is waiting for, on the grounds that there is "no business-case for CUDA on AMD GPUs" [0].
[0] https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq
- Nvidia Is Now More Valuable Than Amazon and Google
https://github.com/vosen/ZLUDA
They still funded it and it was created.
- Debian on Apple hardware (M1 and later)
What are some alternatives?
qCUDA - qCUDA: GPGPU Virtualization at a New API Remoting Method with Para-virtualization
InvokeAI - Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
ffmpeg-over-ip - Connect to remote ffmpeg servers
hip - HIP: C++ Heterogeneous-Compute Interface for Portability
Juice-Labs - Juice Community Version Public Release
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]