Bolt vs Taskflow

| | Bolt | Taskflow |
|---|---|---|
| Mentions | 3 | 24 |
| Stars | 370 | 9,577 |
| Growth | - | 1.3% |
| Activity | 0.0 | 7.9 |
| Latest commit | about 8 years ago | 10 days ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Bolt
-
AMD's CDNA 3 Compute Architecture
This is frankly starting to sound a lot like the ridiculous "blue bubbles" discourse.
AMD's products have generally failed to gain traction because their implementations are half-assed, buggy, and incomplete (despite promising more features, these are often paper features, or career-oriented development from now-departed developers). All of the same "developer B" stuff from OpenGL applies to OpenCL as well.
http://richg42.blogspot.com/2014/05/the-truth-on-opengl-driv...
AMD has left a trail of abandoned code and disappointed developers in their wake. These two repos are the same thing for AMD's ecosystem and NVIDIA's ecosystem, how do you think the support story compares?
https://github.com/HSA-Libraries/Bolt
https://github.com/NVIDIA/thrust
In the last few years they have (once again) dumped everything and started over. ROCm supported essentially no consumer cards and rotated support rapidly even in the CDNA world. It offers no binary compatibility story; it has to be compiled for specific chips within a generation, not even just "RDNA3" but "Navi 31 specifically". And so on. And nobody with consumer cards could access it until about six months ago, and even that is only on Windows; consumer cards are still not supported on Linux (!).
https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...
This is on top of the actual problems that still remain, as geohot found out. Installing ROCm is a several-hour process that will involve debugging the platform just to get it to install, and then you will probably find that the actual code demos segfault when you run them.
AMD's development processes are not really open; actual development is siloed inside the company, with quarterly code dumps to the outside. The current code is not guaranteed to run on the actual driver itself; they do not test it even in the supported configurations.
It hasn't gained traction because it's a low-quality product that hardly anyone can access and run anyway.
-
High quality OpenCL compute libraries
What I'm saying is that there are options on that front that make it more likely for what you're looking for to exist. I haven't surveyed the existing libraries that much, but without templates and single-source integration you're unlikely to find such libraries; that's really why OpenCL doesn't have those things. However, I did name-drop the AMD-targeted OpenCL equivalent of Thrust: https://github.com/HSA-Libraries/Bolt - I don't know if you can really achieve OpenCL multi-accelerator compatibility with it, though.
-
Nvidia in the Valley
OpenCL had a bit of a "second-mover curse" where instead of trying to solve one problem (GPGPU acceleration) it tried to solve everything (a generalized framework for heterogeneous dispatch) and it just kinda sucks to actually use. It's not that it's slower or faster, in principle it should be the same speed when dispatched to the hardware (+/- any C/C++ optimization gotchas of course), but it just requires an obscene amount of boilerplate to "draw the first triangle" (or, launch the first kernel), much like Vulkan.
HIP was supposed to rectify this, but now you're buying into AMD's custom language and its limitations... and there are limitations, things that CUDA can do that HIP can't (texture unit access was an early one - and texture units aren't just for texturing, they're for coalescing all kinds of 2d/3d/higher-dimensional memory access). And AMD has a history of abandoning these projects after a couple years and leaving them behind and unsupported... like their Thrust framework counterpart, Bolt, which hasn't been updated in 8 years now.
https://github.com/HSA-Libraries/Bolt
The old bit about "Vendor B" leaving behind a "trail of projects designed to pad resumes and show progress to middle managers" still rings absolutely true with AMD. AMD has a big uphill climb in general to shake this reputation of being completely unserious about their software... and I'm not even talking about drivers here.
http://richg42.blogspot.com/2014/05/the-truth-on-opengl-driv...
Taskflow
-
Improvements of Clojure in his time
For parallel programming nowadays, personally I reach for C++ Taskflow when I really care about performance, or a mix of core.async and running multiple load balanced instances when I’m doing more traditional web backend stuff in Clojure.
- Taskflow: A General-Purpose Parallel and Heterogeneous Task Programming System
-
How to go from intermediate to advanced in C++?
Also, you can take a look at good libraries. The problem is that libraries are very often heavily templated, so it can be hard. For example, I like the style of the Taskflow library: I think it is very clear and relatively small, while making use of more advanced techniques: https://github.com/taskflow/taskflow
-
gcl v1.1 released - Graph Concurrent Library for C++
Cool. Thanks! How does it compare to taskflow?
-
std::execution from the metal up - Paul Bendixen - Meeting C++ 2022
I've not yet seen (though it's been a while since I last looked) any evidence of being able to build a computation graph and "save" it to re-run on new inputs. Something like https://github.com/taskflow/taskflow
-
Proper abstraction for this?
It seems you're describing something like a generic parallel task framework. Check out Taskflow for a production-ready example: https://github.com/taskflow/taskflow/blob/master/
-
That one technology, question, or skill you never learned, and now you are haunted by during every new job conversation...
- https://github.com/taskflow/taskflow (I recommend learning it first, since its API and documentation are excellent)
-
Parallel Computations in C++: Where Do I Begin?
If you want some sort of "job" system, where you submit items to some sort of queue to be processed in parallel, try searching for a thread pool - there isn't one in the standard library, but there are about a million implementations online. There are more complicated versions of that idea that describe computation as a directed acyclic graph, such as Taskflow.
-
High level overview of my custom game engine
The tooling decisions affect engine design, though. For example, if you want a visual representation of the job graph as it happened in a specific frame of interest, you need to pass around information about job relationships and output it to a tool of choice. For an example, see https://github.com/taskflow/taskflow
-
Is there any good reason not to build an open-source C++ project on Intels oneTBB?
I am aware of DAG-based task threading libraries like Taskflow and HPX; however, the benefit they offer is not obvious to me, since the following sequential section depends on the parallel part being completed fully. If you want to elaborate on the benefits of this approach, that would be welcome.
What are some alternatives?
Boost.Compute - A C++ GPU Computing Library for OpenCL
tbb - oneAPI Threading Building Blocks (oneTBB) [Moved to: https://github.com/oneapi-src/oneTBB]
Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
tensorflow - An Open Source Machine Learning Framework for Everyone
moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11
HPX - The C++ Standard Library for Parallelism and Concurrency
junction - Concurrent data structures in C++
C++ Actor Framework - An Open Source Implementation of the Actor Model in C++
entt - Gaming meets modern C++ - a fast and reliable entity component system (ECS) and much more
ArrayFire - ArrayFire: a general purpose GPU library.
libunifex - Unified Executors