rivi-loader vs Orochi

| | rivi-loader | Orochi |
|---|---|---|
| Mentions | 5 | 5 |
| Stars | 16 | 188 |
| Growth | - | 7.4% |
| Activity | 4.1 | 7.7 |
| Latest commit | 8 months ago | 10 days ago |
| Language | Rust | C++ |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rivi-loader
-
Any good resources for purely computational work?
If you use Rust, I have been developing a project similar to kompute: https://github.com/periferia-labs/rivi-loader
-
How are Vulkan, CUDA, Triton and all other things connected?
For cross-platform support, look at WebGPU and Vulkan (e.g.: [0], [1]). Essentially, you would need to write the function in WGSL, GLSL, HLSL, or MSL. Each of these can be cross-compiled to SPIR-V (what Vulkan consumes) with cross-compilers such as spirv-cross and naga.
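As a rough sketch of that cross-compilation step (the tools are real, but the shader file names here are placeholders, and each tool must be installed separately):

```shell
# GLSL compute shader -> SPIR-V, using glslangValidator from the Vulkan SDK
glslangValidator -V add.comp -o add.spv

# WGSL -> SPIR-V, using the naga CLI (cargo install naga-cli);
# naga infers source/target formats from the file extensions
naga add.wgsl add.spv

# HLSL -> SPIR-V, using DXC's SPIR-V code generation
dxc -T cs_6_0 -spirv add.hlsl -Fo add.spv
```

The resulting `.spv` binary is what a Vulkan compute loader such as rivi-loader or kompute actually consumes.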
-
WebRTC.rs reached an important milestone in connectivity!
Looking to integrate webrtc-rs into the Rust-native version: https://github.com/periferia-labs/rivi-loader
- Vulkan-based program loader for GPGPU applications in Rust
- Vulkan-based program loader for GPGPU applications
Orochi
-
Blender 3.6 (huge AMD gains with HIP RT) reaches Beta Phase 3
While you're waiting for the HIP SDK to release, check out Orochi as an alternative https://github.com/GPUOpen-LibrariesAndSDKs/Orochi
-
AMD Posts Patch Enabling Vega APU/GPU Support For Blender's HIP Backend
This isn't a full-fledged SDK, but if you develop using the driver/runtime API and NVRTC on Linux, you could certainly use this library to make it run on Windows: https://github.com/GPUOpen-LibrariesAndSDKs/Orochi Bonus: it also lets you compile a single binary that runs on both CUDA and HIP!
-
First time in 2 years I was able to get Blender running with an AMD GPU on Linux!
You can't run CUDA binaries directly. But you can use a wrapper library like Orochi to run both CUDA and HIP from a single binary that dynamically loads the CUDA/HIP libraries at runtime: https://github.com/GPUOpen-LibrariesAndSDKs/Orochi
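A minimal sketch of what that looks like in practice, based on Orochi's public API (function names mirror the CUDA driver API with an `oro` prefix; this assumes the Orochi headers are on the include path and that either a CUDA or a HIP driver is present at runtime):

```cpp
#include <Orochi/Orochi.h>
#include <cstdio>

int main() {
    // Ask Orochi to dynamically load whichever driver library is
    // available: libcuda/nvcuda for NVIDIA, or amdhip64 for AMD.
    if (oroInitialize((oroApi)(ORO_API_HIP | ORO_API_CUDA), 0) != 0) {
        printf("no CUDA or HIP driver found\n");
        return 1;
    }

    oroInit(0);

    oroDevice device;
    oroDeviceGet(&device, 0);

    char name[128];
    oroDeviceGetName(name, sizeof(name), device);
    // The same binary prints an NVIDIA or AMD device name
    // depending on which driver was loaded.
    printf("device: %s\n", name);
    return 0;
}
```

The key point is that nothing here links against CUDA or HIP at build time; the dispatch to one backend or the other happens inside `oroInitialize`.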
-
How are Vulkan, CUDA, Triton and all other things connected?
I stumbled across Orochi from AMD while looking for their FSR 2.0 implementation; it basically boils down to a wrapper over CUDA and HIP. I don't know if it is still maintained or functional, but here's the link if anyone is interested: https://github.com/GPUOpen-LibrariesAndSDKs/Orochi
- Orochi – dynamic loading of HIP/CUDA from a single binary
What are some alternatives?
SPIRV-Cross - SPIRV-Cross is a practical tool and library for performing reflection on SPIR-V and disassembling SPIR-V back to high level languages.
Vulkan - Examples and demos for the new Vulkan API
Stable-Diffusion-ONNX-FP16 - Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all DirectML supported cards including AMD and Intel.
neovide - No Nonsense Neovim Client in Rust
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data processing use cases. Backed by the Linux Foundation.
wgpu - Cross-platform, safe, pure-Rust graphics API.
rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform
naga - Universal shader translation in Rust
vulkano - Safe and rich Rust wrapper around the Vulkan API