RxCpp
Thrust
| | RxCpp | Thrust |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 2,972 | 4,839 |
| Growth | 1.9% | - |
| Activity | 0.0 | 6.9 |
| Latest commit | 2 months ago | 3 months ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RxCpp
-
Why doesn't C++ use higher-order functions on iterators like Rust does?
And, prior to that https://github.com/ReactiveX/RxCpp
-
ReactivePlusPlus (reactive programming library for c++20) v0.0.1 is out with base operators (looking for feedback)
Yeah, I know about this problem with operators =). The original RxCpp implementation has the same issue: all the functions are implemented in a base class, and dependent functions are called internally.
-
RxCpp VS ReactivePlusPlus - a user suggested alternative
2 projects | 17 Apr 2022
-
What are some candidate libraries for inter-thread communication like message boxes or event systems?
Also, you can check rxcpp; its documentation covers the reactive approach. It is a functional version of the observer/publisher-subscriber patterns with the ability to be multithreaded. You can send events from one side, subscribe from another, and modify events in between.
-
Converting header-only libraries to modules?
I use a very large header-only library RxCpp. Simply adding #include "RxCpp/rx.hpp" to one .cpp file adds >1 second of compilation time. I'd like to use it as a module, but when I try to import "RxCpp/rx.hpp";, I get a bunch of errors.
-
Learning how to create applications with C++ for windows
+1 for Rx, specifically RxCpp. I've used it in concert with Qt with great results.
Thrust
-
AMD's CDNA 3 Compute Architecture
this is frankly starting to sound a lot like the ridiculous "blue bubbles" discourse.
AMD's products have generally failed to gain traction because their implementations are half-assed, buggy, and incomplete (despite promising more features, these are often paper features, or career-oriented development from now-departed developers). All of the same "developer B" stuff from OpenGL applies to OpenCL as well.
http://richg42.blogspot.com/2014/05/the-truth-on-opengl-driv...
AMD has left a trail of abandoned code and disappointed developers in its wake. These two repos are the equivalent projects in AMD's ecosystem and NVIDIA's ecosystem; how do you think the support story compares?
https://github.com/HSA-Libraries/Bolt
https://github.com/NVIDIA/thrust
In the last few years they have (once again) dumped everything and started over. ROCm supported essentially no consumer cards and rotated support rapidly even in the CDNA world. It offers no binary-compatibility story: it has to be compiled for specific chips within a generation, not even just "RDNA3" but "Navi 31" specifically, etc. And nobody with consumer cards could access it until about six months ago, and even that is only on Windows; consumer cards are still not supported on Linux (!).
https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...
This is on top of the actual problems that still remain, as geohot found out: installing ROCm is a several-hour process that involves debugging the platform just to get it to install, and then you will probably find that the actual code demos segfault when you run them.
AMD's development processes are not really open; actual development is siloed inside the company, with quarterly code dumps outside. The current code is not guaranteed to run on the actual driver itself; they do not test it even in the supported configurations.
It hasn't gotten traction because it's a low-quality product that almost nobody can access and run anyway.
-
Parallel Computations in C++: Where Do I Begin?
For a higher level GPU interface, Thrust provides "standard library"-like functions that run in parallel on the GPU (Nvidia only)
-
What are some cool modern libraries you enjoy using?
For GPGPU, I like thrust. C++-idiomatic way of writing CUDA code, passing between host and device, etc.
-
A vision of a multi-threaded Emacs
Users should work with higher level primitives like tasks, parallel loops, asynchronous functions etc. Think TBB, Thrust, Taskflow, lparallel for CL, etc.
What are some alternatives?
NumCpp - C++ implementation of the Python Numpy library
CUB - this repository has moved to github.com/nvidia/cub, which is automatically mirrored here.
ReactivePlusPlus - Implementation of the ReactiveX async observable/observer model (reactive programming) in C++, designed with performance and templates in mind
ArrayFire - ArrayFire: a general purpose GPU library.
etl - Embedded Template Library
Boost.Compute - A C++ GPU Computing Library for OpenCL
sobjectizer - An implementation of the Actor, Publish-Subscribe, and CSP models in one rather small C++ framework, with performance, quality, and stability proven by years in production.
HPX - The C++ Standard Library for Parallelism and Concurrency
Aeron - Efficient reliable UDP unicast, UDP multicast, and IPC message transport
moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11
benchmarks - Latency benchmarks for messaging
Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System