KernelAbstractions.jl
futhark
| | KernelAbstractions.jl | futhark |
|---|---|---|
| Mentions | 4 | 52 |
| Stars | 331 | 2,291 |
| Growth | 3.0% | 2.2% |
| Activity | 8.0 | 9.8 |
| Latest commit | 12 days ago | 7 days ago |
| Language | Julia | Haskell |
| License | MIT License | ISC License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
KernelAbstractions.jl
-
Why is AMD leaving ML to nVidia?
For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800XT. In my experience, everything has worked nicely. To set up, install the rocm-opencl package with dnf, add the AMD Julia package (AMDGPU.jl), add yourself to the video group, and you are good to go. Julia's KernelAbstractions.jl is also good to have when writing portable code.
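For readers who haven't seen KernelAbstractions.jl, a minimal portable kernel looks roughly like the sketch below. It follows the package's documented `@kernel`/`@index` API; the exact backend and synchronization calls are assumptions tied to recent KA releases, with `ROCBackend()` and `CUDABackend()` coming from AMDGPU.jl and CUDA.jl respectively.

```julia
using KernelAbstractions

# Elementwise vector add; @index(Global) is this work-item's global index.
@kernel function vadd!(c, @Const(a), @Const(b))
    i = @index(Global)
    @inbounds c[i] = a[i] + b[i]
end

a = rand(Float32, 1024); b = rand(Float32, 1024); c = similar(a)

backend = CPU()                    # swap in ROCBackend() or CUDABackend()
kernel! = vadd!(backend)           # specialize the kernel for that backend
kernel!(c, a, b; ndrange = length(c))
KernelAbstractions.synchronize(backend)
@assert c ≈ a .+ b
```

The same kernel body runs on any supported backend; only the `backend` value (and the array types) change.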
-
Generic GPU Kernels
>Higher level abstractions
like these?
https://github.com/JuliaGPU/KernelAbstractions.jl
-
CUDA.jl v3.3: union types, debug info, graph APIs
For kernel programming, https://github.com/JuliaGPU/KernelAbstractions.jl (shortened to KA) is what the JuliaGPU team has been developing as a unified programming interface for GPUs of any flavor. It's not significantly different from the (basically identical) interfaces exposed by CUDA.jl and AMDGPU.jl, so it's easy to transition to. I think the event system in KA is also far superior to CUDA's native synchronization system, since it allows one to easily express graphs of dependencies between kernels and data transfers.
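As a rough illustration of the event style this comment describes: in the KA versions contemporary with it (circa 0.7/0.8), every kernel launch returned an `Event`, and a `dependencies` keyword chained launches into a graph. The sketch below assumes that older API (later KA releases replaced explicit events with backend-ordered execution), so treat the exact names as historical:

```julia
using KernelAbstractions

@kernel function scale!(x, s)
    i = @index(Global)
    @inbounds x[i] = s * x[i]
end

x  = ones(Float32, 1024)
k! = scale!(CPU(), 64)   # old-style instantiation with a workgroup size

# Each launch returns an Event; `dependencies` orders them into a graph.
ev1 = k!(x, 2f0; ndrange = length(x))
ev2 = k!(x, 3f0; ndrange = length(x), dependencies = (ev1,))
wait(ev2)                # block the host until the chain completes
@assert all(x .== 6f0)
```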
futhark
-
What downsides exist to Futhark? Seems almost too good to be true?
Why Futhark? (futhark-lang.org)
-
GPU Programming: When, Why and How?
There is no ongoing work to support Metal apart from the work done by Miles. There's an old issue about it: https://github.com/diku-dk/futhark/issues/853#issuecomment-5...
-
Is Parallel Programming Hard, and, If So, What Can You Do About It? v2023.06.11a
Functional programming can be a great way to handle parallel programming sanely. See the Futhark language [1], for example, which accepts high-level constructs like map and converts them to the appropriate machine code for either the CPU or the GPU.
[1] https://futhark-lang.org/
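To make that idea concrete (in Julia rather than Futhark, to match the sketches above): the appeal is that a single high-level `map` expression covers both targets. The GPU line is an assumption about CUDA.jl's array `map` support and needs a CUDA-capable machine:

```julia
# High-level map on the CPU:
xs = collect(Int32, 1:1024)
ys = map(x -> 2x + 1, xs)
@assert ys[1] == 3

# The identical expression on a GPU array with CUDA.jl (assumption:
# CuArray supports `map`; uncomment on a machine with a CUDA GPU):
# using CUDA
# ys_gpu = map(x -> 2x + 1, CuArray(xs))
```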
-
Is there a programming language that will blow my mind?
Futhark - use a functional language to program the GPU
-
Does This Language Exist?
You might want to look into Futhark, although it's mainly designed for writing GPU code.
- Learn WebGPU
-
Two-tier programming language
Futhark https://futhark-lang.org/
- Best book on writing an optimizing compiler (inlining, types, abstract interpretation)?
- Functional GPU programming: what are alternatives or generalizations of the idea of "number of cycles must be known at compile time"?
- APL: An Array Oriented Programming Language (2018)
What are some alternatives?
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
arrayfire-rust - Rust wrapper for ArrayFire
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
dex-lang - Research language for array processing in the Haskell/ML family
AMDGPU.jl - AMD GPU (ROCm) programming in Julia
Halide - a language for fast, portable data-parallel computation
StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)
julia - The Julia Programming Language
oneAPI.jl - Julia support for the oneAPI programming toolkit.
BQN - An APL-like programming language. Self-hosted!
Agents.jl - Agent-based modeling framework in Julia
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.