| | opencl3 | leaf |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 91 | 5,552 |
| Growth | - | -0.0% |
| Activity | 4.7 | 0.0 |
| Latest commit | about 1 month ago | about 1 month ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opencl3
- [Rust] State of GPGPU in 2022
-
State of GPGPU in 2022
Rust-GPU looks promising, as does Rust-CUDA; I also see an OpenCL 3.0 wrapper.
-
[D] Why does AMD do so much less work in AI than NVIDIA?
If I were to build something like this again, I'd probably recommend the opencl3 bindings by Ken Barker - I played around with them a little for some hackathon projects, and the bindings seem pretty complete when it comes to kernels, scheduling, and distribution of tasks.
-
Is the ocl crate unmaintained?
I found an OpenCL 3 project when I searched for recently updated OpenCL Rust projects. The last commit was about two months ago, so it's probably still being maintained. It may be better to fork it than ocl, because it supports the latest version of OpenCL.
leaf
-
[D] Why does AMD do so much less work in AI than NVIDIA?
I used a lot of the dependencies behind the leaf framework, which was abandoned by its authors a while back due to funding issues; since I implemented my project in Rust, most of the bindings were still maintained even though the leaf framework itself no longer was.
-
AMD Demonstrates Stacked 3D V-Cache Technology: 192 MB at 2 TB/SEC
I tried to create a ML framework[0] that would work on both CUDA and OpenCL (and natively on the CPU) around 2015/2016, which included creating FFI wrappers for both CUDA and OpenCL. This is where my experience on the subject (and my contempt for NVIDIA) comes from.
My memory isn't perfect, but IIRC the situation was roughly the following: we were quite short on resources (both dev time and money), which meant we had to choose our scope wisely. Optimally we would have implemented both CUDA and OpenCL 2.0, but we had to settle for OpenCL 1.2 (which offered reduced performance, but was "good enough" for inference). IIRC OpenCL 2.0 was very similar to CUDA at the time in the capabilities it assumed and offered, and cards like the GTX Titan X had "compute capabilities" in CUDA that supported features like shared virtual memory between CPU and GPU. In fact, the advances around memory management (and async copying) that were present in CUDA but not in OpenCL 1.x were the main source of the performance differences between the two.
From everything I can tell, at that point in time NVIDIA could have supported OpenCL 2.0 had they wanted to, as far as the technical requirements go. Why they didn't is pure speculation (lack of internal resources due to focusing on devtools?), but to me it always looked like they were using the edge they got via proprietary libraries like cuDNN to get a foot into the field of ML, and then purposefully neglected OpenCL to prevent any competitors from catching up. Classic Embrace, Extend, Extinguish.
[0]: https://github.com/autumnai/leaf
What are some alternatives?
Rust-CUDA - Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.
rusty-machine - Machine Learning library for Rust
Vulkan-ValidationLayers - Vulkan Validation Layers (VVL)
rust - Rust language bindings for TensorFlow
rustlearn - Machine learning crate for Rust
CNTK - Wrapper around Microsoft CNTK library
rust - Empowering everyone to build reliable and efficient software.
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/