Thrust vs Cgml

| | Thrust | Cgml |
|---|---|---|
| Mentions | 4 | 22 |
| Stars | 4,839 | 38 |
| Growth | - | - |
| Activity | 6.9 | 8.6 |
| Last commit | 3 months ago | 4 months ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Thrust
-
AMD's CDNA 3 Compute Architecture
This is frankly starting to sound a lot like the ridiculous "blue bubbles" discourse.
AMD's products have generally failed to catch traction because their implementations are half-assed, buggy, and incomplete (despite promising more features, these are often paper features, or career-oriented development from now-departed developers). All of the same "developer B" stuff from OpenGL really applies to OpenCL as well.
http://richg42.blogspot.com/2014/05/the-truth-on-opengl-driv...
AMD has left a trail of abandoned code and disappointed developers in their wake. These two repos are the same thing for AMD's ecosystem and NVIDIA's ecosystem; how do you think the support story compares?
https://github.com/HSA-Libraries/Bolt
https://github.com/NVIDIA/thrust
In the last few years they have (once again) dumped everything and started over. ROCm supported essentially no consumer cards and rotated support rapidly even in the CDNA world. It offers no binary-compatibility story: code has to be compiled for specific chips within a generation, not even just "RDNA3" but "Navi 31" specifically. Etc etc. And nobody with consumer cards could access it until about six months ago, and even that is Windows-only; consumer cards are still not supported on Linux (!).
https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...
This is on top of the actual problems that still remain, as geohot found out. Installing ROCm is a several-hour process that will involve debugging the platform just to get it to install, and then you will probably find that the actual code demos segfault when you run them.
AMD's development processes are not really open; actual development is siloed inside the company, with quarterly code dumps to the outside. The current code is not guaranteed to run on the shipping driver; they do not test it even in the supported configurations.
It hasn't gained traction because it's a low-quality product and nobody can even access it and run it anyway.
-
Parallel Computations in C++: Where Do I Begin?
For a higher-level GPU interface, Thrust provides "standard library"-like functions that run in parallel on the GPU (Nvidia only).
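To give a feel for that "standard library on the GPU" style, here's a minimal Thrust sketch (CUDA C++, compiled with nvcc; the data and sizes are made up for the example):

```cpp
#include <thrust/device_vector.h>
#include <thrust/functional.h>
#include <thrust/reduce.h>
#include <thrust/sequence.h>
#include <thrust/sort.h>
#include <cstdio>

int main() {
    // Fill a device vector with 0, 1, 2, ... much like std::iota.
    thrust::device_vector<int> d(1 << 20);
    thrust::sequence(d.begin(), d.end());

    // These read like <algorithm>/<numeric> calls but run as CUDA kernels.
    thrust::sort(d.begin(), d.end(), thrust::greater<int>());
    long long sum = thrust::reduce(d.begin(), d.end(), 0LL);

    std::printf("sum = %lld\n", sum);
    return 0;
}
```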
-
What are some cool modern libraries you enjoy using?
For GPGPU, I like Thrust. It's a C++-idiomatic way of writing CUDA code, passing data between host and device, etc.
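A small sketch of what that host/device passing looks like in Thrust (the `square` functor and the data are just placeholders for the example):

```cpp
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <cstdio>

struct square {
    __host__ __device__ float operator()(float x) const { return x * x; }
};

int main() {
    // Ordinary host-side data.
    thrust::host_vector<float> h(4);
    h[0] = 1; h[1] = 2; h[2] = 3; h[3] = 4;

    // Assignment copies host -> device; no explicit cudaMemcpy needed.
    thrust::device_vector<float> d = h;

    // Runs as a CUDA kernel under the hood.
    thrust::transform(d.begin(), d.end(), d.begin(), square());

    // Copy the results back the same way.
    thrust::host_vector<float> out = d;
    for (float v : out) std::printf("%g\n", v);
    return 0;
}
```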
-
A vision of a multi-threaded Emacs
Users should work with higher-level primitives like tasks, parallel loops, asynchronous functions, etc. Think TBB, Thrust, Taskflow, lparallel for CL, and so on.
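For a rough sense of what such higher-level primitives look like in code, here's a minimal C++17 sketch using the standard parallel algorithms and std::async (not tied to TBB, Thrust, or Taskflow specifically):

```cpp
#include <algorithm>
#include <execution>
#include <future>
#include <numeric>
#include <vector>
#include <cstdio>

int main() {
    std::vector<double> data(1'000'000);
    std::iota(data.begin(), data.end(), 0.0);

    // Parallel loop: the runtime decides how to split the work across threads.
    std::for_each(std::execution::par, data.begin(), data.end(),
                  [](double& x) { x = x * x; });

    // Asynchronous task: runs concurrently with whatever the caller does next.
    auto total = std::async(std::launch::async, [&] {
        return std::reduce(std::execution::par, data.begin(), data.end(), 0.0);
    });

    std::printf("sum of squares = %f\n", total.get());
    return 0;
}
```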
Cgml
-
Asynchronous Programming in C#
> Meant no offense
None taken.
> computervison project in c#
Yeah, for CV applications nuget.org is indeed not particularly great. Very few people are using C# for these things; people typically choose something else, like Python and OpenCV.
BTW, the same applies to ML libraries; most folks are using the Python/Torch/CUDA stack. For that hobby project https://github.com/Const-me/Cgml/ I had to re-implement the entire tech stack in C#/C++/HLSL.
-
Groq CEO: 'We No Longer Sell Hardware'
> If there is a future with this idea, its gotta be just shipping the LLM with game right?
That might be a nice application for this library of mine: https://github.com/Const-me/Cgml/
That's an open-source Mistral ML model implementation which runs on GPUs (all of them, not just nVidia), takes 4.5 GB on disk, uses under 6 GB of VRAM, and is optimized for the interactive single-user use case. Probably fast enough for that application.
You wouldn't want in-game dialogues with the original model, though. Game developers would need to fine-tune, retrain, and/or do something else with these weights and/or my implementation.
-
Ask HN: How to get started with local language models?
If you just want to run Mistral on Windows, you could try my port: https://github.com/Const-me/Cgml/tree/master/Mistral/Mistral...
The setup is relatively easy: install the .NET runtime, download the 4.5 GB model file over BitTorrent, unpack a small ZIP file, and run the EXE.
-
OpenAI postmortem – Unexpected responses from ChatGPT
Speaking of random sampling during inference, most ML models do it rather inefficiently.
Here’s a better way: https://github.com/Const-me/Cgml/blob/master/Readme.md#rando...
My HLSL is easily portable to CUDA, which has `__syncthreads` and `atomicInc` intrinsics.
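The linked readme describes the actual Cgml approach; purely as an illustration of the general technique (a block-wide prefix sum plus an atomic counter, not the real Cgml kernel), here's a minimal CUDA sketch of sampling an index from a probability vector on the GPU:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Single-block kernel: sample index i from unnormalized probabilities p[0..n),
// n <= blockDim.x. "u" is a uniform random number in [0, 1) from the host.
__global__ void sampleKernel(const float* p, int n, float u, int* outIndex) {
    extern __shared__ float prefix[];   // inclusive prefix sums
    __shared__ unsigned int count;      // how many prefix sums are <= threshold

    int t = threadIdx.x;
    if (t == 0) count = 0;
    prefix[t] = (t < n) ? p[t] : 0.0f;
    __syncthreads();

    // Hillis-Steele inclusive scan across the block.
    for (int offset = 1; offset < blockDim.x; offset <<= 1) {
        float add = (t >= offset) ? prefix[t - offset] : 0.0f;
        __syncthreads();
        prefix[t] += add;
        __syncthreads();
    }

    // Scale u by the total, so p does not need to be normalized.
    float threshold = u * prefix[blockDim.x - 1];

    // The sampled index equals the number of prefix sums <= threshold.
    if (t < n && prefix[t] <= threshold)
        atomicInc(&count, 0xFFFFFFFFu);     // behaves like atomicAdd(&count, 1)
    __syncthreads();

    if (t == 0)
        *outIndex = min((int)count, n - 1); // clamp for the u ~ 1.0 edge case
}

int main() {
    const int n = 8;
    float hp[n] = { 0.05f, 0.1f, 0.3f, 0.05f, 0.2f, 0.1f, 0.15f, 0.05f };

    float* dp; int* dIdx;
    cudaMalloc(&dp, n * sizeof(float));
    cudaMalloc(&dIdx, sizeof(int));
    cudaMemcpy(dp, hp, n * sizeof(float), cudaMemcpyHostToDevice);

    const int block = 256;          // block size >= n
    float u = 0.42f;                // would come from a host-side RNG
    sampleKernel<<<1, block, block * sizeof(float)>>>(dp, n, u, dIdx);

    int idx = 0;
    cudaMemcpy(&idx, dIdx, sizeof(int), cudaMemcpyDeviceToHost);
    std::printf("sampled index: %d\n", idx);

    cudaFree(dp); cudaFree(dIdx);
    return 0;
}
```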
-
Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I did that a few times with Direct3D 11 compute shaders. Here's an open-source example: https://github.com/Const-me/Cgml
Pretty sure Vulkan is going to work equally well; at the very least, there's the open-source DXVK project, which implements D3D11 on top of Vulkan.
-
Brave Leo now uses Mixtral 8x7B as default
Here’s an example of a custom 4 bits/weight codec for ML weights:
https://github.com/Const-me/Cgml/blob/master/Readme.md#bcml1...
llama.cpp does it slightly differently, but still, AFAIK their quantized data formats are conceptually similar to my codec.
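The readme documents the real BCML1 layout; as a generic, made-up illustration of what a 4 bits/weight block codec looks like (32-weight blocks with one float scale each; not BCML1, and not llama.cpp's format either), here's a small C++ sketch:

```cpp
#include <cstdint>
#include <cmath>
#include <cstdio>

// One compressed block: 32 weights -> 16 packed nibbles + a per-block scale.
// Illustrative layout only.
struct Block4 {
    float scale;          // dequantization multiplier
    uint8_t packed[16];   // 32 signed 4-bit values, two per byte
};

static Block4 quantize(const float* w) {
    Block4 b{};
    float amax = 0.0f;
    for (int i = 0; i < 32; i++) amax = std::fmax(amax, std::fabs(w[i]));
    b.scale = amax / 7.0f;                  // map [-amax, amax] onto [-7, 7]
    float inv = (b.scale > 0.0f) ? 1.0f / b.scale : 0.0f;
    for (int i = 0; i < 32; i += 2) {
        int lo = (int)std::lround(w[i] * inv) + 8;       // stored as 1..15, 8 = zero
        int hi = (int)std::lround(w[i + 1] * inv) + 8;
        b.packed[i / 2] = (uint8_t)((hi << 4) | lo);
    }
    return b;
}

static void dequantize(const Block4& b, float* w) {
    for (int i = 0; i < 32; i += 2) {
        uint8_t byte = b.packed[i / 2];
        w[i]     = ((int)(byte & 0x0F) - 8) * b.scale;
        w[i + 1] = ((int)(byte >> 4) - 8) * b.scale;
    }
}

int main() {
    float w[32], out[32];
    for (int i = 0; i < 32; i++) w[i] = std::sin(i * 0.3f);
    Block4 b = quantize(w);
    dequantize(b, out);
    for (int i = 0; i < 4; i++)
        std::printf("%f -> %f\n", w[i], out[i]);
    return 0;
}
```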
-
Efficient LLM inference solution on Intel GPU
-
Vcc – The Vulkan Clang Compiler
> the API was high-friction due to the shader language, and the glue between shader and CPU
Direct3D 11 compute shaders share these things with Vulkan, yet D3D11 is relatively easy to use. For example, see this library, which implements ML-targeted compute shaders for C# with minimal friction: https://github.com/Const-me/Cgml The backend, implemented in C++, is rather simple: it just binds resources and dispatches these shaders.
I think the main usability issue with Vulkan is API design. Vulkan was designed only with AAA game engines in mind. The developers of those game engines have borderline unlimited budgets, and their requirements are very different from those of ordinary folks who want to leverage GPU hardware.
-
I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
Minor update: https://github.com/Const-me/Cgml/releases/tag/1.1a (can't edit that comment anymore, too late).
What are some alternatives?
CUB - THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE.
PowerInfer - High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
ArrayFire - ArrayFire: a general purpose GPU library.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
Boost.Compute - A C++ GPU Computing Library for OpenCL
mlx - MLX: An array framework for Apple silicon
HPX - The C++ Standard Library for Parallelism and Concurrency
EmotiVoice - EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11
llamafile - Distribute and run LLMs with a single file.
Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System
clspv - Clspv is a compiler for OpenCL C to Vulkan compute shaders