emacs-request
Thrust (DISCONTINUED)
|  | emacs-request | Thrust |
|---|---|---|
| Mentions | 10 | 4 |
| Stars | 604 | 4,839 |
| Growth | - | - |
| Activity | 0.0 | 6.9 |
| Latest commit | about 1 year ago | about 2 months ago |
| Language | Emacs Lisp | C++ |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
emacs-request
-
Lsp-Bridge, Not Even Wrong
That is quite a normal thing to do. Have you not seen Emacs Async? Take a look, it is a useful thing. Or Emacs Request. Since Emacs does not have a proper thread scheduler, that is the next best thing you can do.
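The async.el pattern the comment alludes to can be sketched in a few lines: `async-start` spawns a child Emacs process to evaluate a function and invokes a callback in the parent with the result. A minimal sketch, following async.el's documented API (the computation itself is just a placeholder):

```elisp
(require 'async)

;; Run an expensive computation in a child Emacs process,
;; then handle the result in the parent without blocking it.
(async-start
 ;; START-FUNC: evaluated in the child Emacs process.
 (lambda ()
   (let ((sum 0))
     (dotimes (i 1000000 sum)
       (setq sum (+ sum i)))))
 ;; FINISH-FUNC: called in the parent with the child's return value.
 (lambda (result)
   (message "Sum computed asynchronously: %d" result)))
```

Because the work happens in a separate Emacs process rather than a thread, the parent's UI stays responsive, which is exactly the workaround for the missing thread scheduler mentioned above.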
-
[ANN] alphapapa/plz.el: v0.3 release (HTTP library for Emacs)
Exciting! I've been using request.el for my own projects mostly out of habit. Could you outline some of the relative advantages of plz?
-
Upload region to 0x0.st
Instead of shelling out to curl, use url.el or request.el.
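A hedged sketch of what that suggestion could look like with request.el (the `my/` command name is hypothetical; the multipart `:files` form follows request.el's README, and 0x0.st expects a `file` field):

```elisp
(require 'request)
(require 'subr-x)  ; for string-trim

(defun my/0x0-upload-region (beg end)
  "Upload the active region to 0x0.st and echo the resulting URL."
  (interactive "r")
  (request "https://0x0.st"
    :type "POST"
    ;; Send the region contents as a multipart upload named "file".
    :files `(("file" . ("region.txt"
                        :data ,(buffer-substring-no-properties beg end))))
    :success (cl-function
              (lambda (&key data &allow-other-keys)
                (message "Uploaded: %s" (string-trim data))))
    :error (cl-function
            (lambda (&key error-thrown &allow-other-keys)
              (message "Upload failed: %S" error-thrown)))))
```

This keeps the upload inside Emacs while still using curl under the hood, since request.el shells out to curl itself when it is available.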
-
A vision of a multi-threaded Emacs
You mean John Wiegley's async package? Maybe it isn't used so often; however, async processes are used in Emacs. Check, for example, the functions 'native-compile-async' or 'async-byte-compile-file'. There is another package, request.el, that uses async processes to do the network I/O (via curl).
-
Tired of leaving emacs to calculate your primer melting temperatures? tmcalculator.el can help!
This? https://github.com/tkf/emacs-request
-
plz.el: An HTTP library for Emacs, using curl as a backend
How does this compare to something like request.el? https://github.com/tkf/emacs-request
You can already use emacs-request; it offers a very nice asynchronous API, uses curl by default if present, and falls back to Emacs's url.el if curl is not found.
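The asynchronous API being praised looks roughly like this, a minimal sketch following request.el's README (the repository URL is just an illustrative endpoint):

```elisp
(require 'request)

;; `request' returns immediately; curl (or url.el as a fallback)
;; performs the I/O, and the callbacks fire when the response arrives.
(request "https://api.github.com/repos/tkf/emacs-request"
  :parser 'json-read
  :success (cl-function
            (lambda (&key data &allow-other-keys)
              (message "Stars: %s" (alist-get 'stargazers_count data))))
  :error (cl-function
          (lambda (&key error-thrown &allow-other-keys)
            (message "Request failed: %S" error-thrown))))
```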
-
Using Emacs org-mode as a REST client replacement
Thrust
-
AMD's CDNA 3 Compute Architecture
This is frankly starting to sound a lot like the ridiculous "blue bubbles" discourse.
AMD's products have generally failed to gain traction because their implementations are half-assed, buggy, and incomplete (despite promising more features, these are often paper features or career-oriented development from now-departed developers). All of the same "developer B" criticism from OpenGL applies to OpenCL as well.
http://richg42.blogspot.com/2014/05/the-truth-on-opengl-driv...
AMD has left a trail of abandoned code and disappointed developers in its wake. These two repos are the same thing in AMD's ecosystem and NVIDIA's ecosystem; how do you think the support story compares?
https://github.com/HSA-Libraries/Bolt
https://github.com/NVIDIA/thrust
In the last few years they have (once again) dumped everything and started over. ROCm supported essentially no consumer cards and rotated support rapidly even in the CDNA world. It offers no binary-compatibility story: code has to be compiled for specific chips within a generation, not even just "RDNA3" but "Navi 31" specifically. And so on. Nobody with consumer cards could even access it until about six months ago, and that is still only on Windows; consumer cards are not even supported on Linux (!).
https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...
This is on top of the actual problems that still remain, as geohot found out. Installing ROCm is a several-hour process that will involve debugging the platform just to get it to install, and then you will probably find that the actual code demos segfault when you run them.
AMD's development processes are not really open: actual development is siloed inside the company, with quarterly code dumps to the outside. The current code is not guaranteed to run on the actual driver itself; they do not test it even in the supported configurations.
It hasn't gained traction because it is a low-quality product and hardly anyone can access and run it anyway.
-
Parallel Computations in C++: Where Do I Begin?
For a higher level GPU interface, Thrust provides "standard library"-like functions that run in parallel on the GPU (Nvidia only)
-
What are some cool modern libraries you enjoy using?
For GPGPU, I like thrust. C++-idiomatic way of writing CUDA code, passing between host and device, etc.
-
A vision of a multi-threaded Emacs
Users should work with higher level primitives like tasks, parallel loops, asynchronous functions etc. Think TBB, Thrust, Taskflow, lparallel for CL, etc.
What are some alternatives?
CUB - THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE.
ArrayFire - ArrayFire: a general purpose GPU library.
Boost.Compute - A C++ GPU Computing Library for OpenCL
HPX - The C++ Standard Library for Parallelism and Concurrency
moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11
Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System
moderngpu - Patterns and behaviors for GPU computing
plz.el - An HTTP library for Emacs
NCCL - Optimized primitives for collective multi-GPU communication
libcds - A C++ library of Concurrent Data Structures
libcudacxx - [ARCHIVED] The C++ Standard Library for your entire system. See https://github.com/NVIDIA/cccl
VexCL - VexCL is a C++ vector expression template library for OpenCL/CUDA/OpenMP