flamethrower vs ArrayFire
| | flamethrower | ArrayFire |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 293 | 4,227 |
| Growth | 0.3% | 0.4% |
| Activity | 0.0 | 6.6 |
| Latest commit | 5 months ago | 12 days ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning flamethrower
We haven't tracked posts mentioning flamethrower yet.
Tracking mentions began in Dec 2020.
Posts mentioning ArrayFire
- Learn WebGPU
Loads of people have stated why easy GPU interfaces are difficult to create, but we solve many difficult things all the time.
Ultimately I think CPUs are just satisfactory for the vast vast majority of workloads. Servers rarely come with any GPUs to speak of. The ecosystem around GPUs is unattractive. CPUs have SIMD instructions that can help. There are so many reasons not to use GPUs. By the time anyone seriously considers using GPUs they're, in my imagination, typically seriously starved for performance, and looking to control as much of the execution details as possible. GPU programmers don't want an automagic solution.
So I think the demand for easy GPU interfaces is just very weak, and therefore no effort has taken off. The amount of work needed to make it as easy to use as CPUs is massive, and the only reason anyone would even attempt to take this on is to lock you in to expensive hardware (see CUDA).
For a practical suggestion, have you taken a look at https://arrayfire.com/ ? It can run on both CUDA and OpenCL, and it has C++, Rust and Python bindings.
- [D] Deep Learning Framework for C++.
Low-overhead — not our goal, but Flashlight is on par with or outperforms most other ML/DL frameworks with its ArrayFire reference tensor implementation, especially on nonstandard setups where framework overhead matters.
- [D] Neural Networks using a generic GPU framework
Looking for frameworks with Julia + OpenCL, I found ArrayFire. It seems quite good; bonus points for the Rust bindings. I will keep looking for more; Julia completely fell off my radar.
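One of the comments above suggests ArrayFire as a practical, backend-agnostic option with C++, Rust, and Python bindings. As a rough sketch of what that looks like in C++ (using ArrayFire's documented af:: API; the unified-backend linking and the backend selection below are assumptions about how the library was installed):

```cpp
// Minimal ArrayFire sketch: the same source runs on the CUDA, OpenCL,
// or CPU backend when linked against the unified backend library.
#include <arrayfire.h>
#include <cstdio>

int main() {
    // Assumption: built against the unified backend (-laf). With a
    // backend-specific build (-lafcuda / -lafopencl / -lafcpu) the
    // setBackend call is unnecessary.
    af::setBackend(AF_BACKEND_DEFAULT);
    af::info();  // prints the backend and device that were selected

    af::array a = af::randu(1024, 1024);   // random matrix, allocated on the device
    af::array b = af::matmul(a, a.T());    // device-side matrix multiply
    float total = af::sum<float>(b);       // reduction, result copied back to the host

    std::printf("sum = %f\n", total);
    return 0;
}
```

The Rust and Python bindings wrap the same array-centric API, which is presumably why ArrayFire keeps coming up in these threads as the closest thing to an "easy GPU interface".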
What are some alternatives?
Thrust - The C++ parallel algorithms library.
Boost.Compute - A C++ GPU Computing Library for OpenCL
VexCL - VexCL is a C++ vector expression template library for OpenCL/CUDA/OpenMP
CUB - Cooperative primitives for CUDA C++ kernel programming (the repository has moved to github.com/nvidia/cub)
Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System
PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
moderngpu - Patterns and behaviors for GPU computing
HPX - The C++ Standard Library for Parallelism and Concurrency
moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11
stdgpu - Efficient STL-like Data Structures on the GPU
RaftLib - The RaftLib C++ library, streaming/dataflow concurrency via C++ iostream-like operators
C++ Actor Framework - An Open Source Implementation of the Actor Model in C++