Cgml vs rust-gpu
| | Cgml | rust-gpu |
|---|---|---|
| Mentions | 22 | 82 |
| Stars | 39 | 6,972 |
| Growth | - | 1.1% |
| Activity | 8.6 | 7.7 |
| Last Commit | 4 months ago | 9 days ago |
| Language | C++ | Rust |
| License | GNU Lesser General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Cgml
-
Asynchronous Programming in C#
> Meant no offense
None taken.
> computer vision project in C#
Yeah, for CV applications nuget.org is indeed not particularly great. Very few people are using C# for these things; people typically choose something else, like Python and OpenCV.
BTW, the same applies to ML libraries: most folks are using the Python/Torch/CUDA stack. For that hobby project https://github.com/Const-me/Cgml/ I had to re-implement the entire tech stack in C#/C++/HLSL.
-
Groq CEO: 'We No Longer Sell Hardware'
> If there is a future with this idea, it's gotta be just shipping the LLM with the game, right?
That might be a nice application for this library of mine: https://github.com/Const-me/Cgml/
That's an open-source Mistral ML model implementation which runs on GPUs (all of them, not just NVIDIA), takes 4.5 GB on disk, uses under 6 GB of VRAM, and is optimized for the interactive single-user use case. Probably fast enough for that application.
You wouldn't want in-game dialogues with the original model, though. Game developers would need to finetune, retrain, and/or do something else with these weights and/or my implementation.
-
Ask HN: How to get started with local language models?
If you just want to run Mistral on Windows, you could try my port: https://github.com/Const-me/Cgml/tree/master/Mistral/Mistral...
The setup is relatively easy: install the .NET runtime, download the 4.5 GB model file via BitTorrent, unpack a small ZIP file, and run the EXE.
-
OpenAI postmortem – Unexpected responses from ChatGPT
Speaking of random sampling during inference, most ML models do it rather inefficiently.
Here's a better way: https://github.com/Const-me/Cgml/blob/master/Readme.md#rando...
My HLSL is easily portable to CUDA, which has `__syncthreads` and `atomicInc` intrinsics.
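To illustrate the general idea, here is a hedged CPU-side sketch in Rust (a conceptual reconstruction, not Cgml's actual HLSL shader): sample a token index from unnormalized probabilities in a single pass by walking the cumulative sum until it crosses a random threshold. On a GPU, the serial loop becomes a parallel prefix scan across threads (synchronized with `__syncthreads` in CUDA), and one way to pick the winner is for each thread whose prefix sum is still below the threshold to bump a counter with `atomicInc`; the final count is the sampled index.

```rust
// Hedged sketch: sample an index from unnormalized probabilities by
// accumulating a running sum until it crosses a random threshold.
// A GPU kernel would replace this serial loop with a parallel prefix scan.
fn sample_index(probs: &[f32], u: f32) -> usize {
    let total: f32 = probs.iter().sum();
    let threshold = u * total; // u is uniform in [0, 1)
    let mut acc = 0.0f32;
    for (i, &p) in probs.iter().enumerate() {
        acc += p;
        if acc >= threshold {
            return i;
        }
    }
    probs.len() - 1 // guard against floating-point round-off
}

fn main() {
    let probs = [0.1f32, 0.6, 0.2, 0.1];
    println!("{}", sample_index(&probs, 0.42)); // prints 1
}
```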
-
Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I did that a few times with Direct3D 11 compute shaders. Here's an open-source example: https://github.com/Const-me/Cgml
Pretty sure Vulkan is gonna work equally well; at the very least, there's the open-source DXVK project, which implements D3D11 on top of Vulkan.
-
Brave Leo now uses Mixtral 8x7B as default
Here's an example of a custom 4 bits/weight codec for ML weights:
https://github.com/Const-me/Cgml/blob/master/Readme.md#bcml1...
llama.cpp does it slightly differently, but AFAIK their quantized data formats are conceptually similar to my codec.
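To make "conceptually similar" concrete, here is a rough Rust sketch of the shared idea behind such 4-bit block codecs: one scale per small block of weights plus a packed 4-bit code per weight. The block size, zero offset, and bit layout below are illustrative assumptions, not the actual BCML1 or llama.cpp formats.

```rust
// Hedged sketch of 4-bit block quantization: 32 weights share one f32
// scale, and each weight is stored as a 4-bit code (offset by 8, so 8 = zero).
struct Block4 {
    scale: f32,       // per-block scale factor
    packed: [u8; 16], // 32 weights × 4 bits = 16 bytes
}

fn quantize_block(weights: &[f32; 32]) -> Block4 {
    let max = weights.iter().fold(0.0f32, |m, w| m.max(w.abs()));
    let scale = if max > 0.0 { max / 7.0 } else { 1.0 }; // map weights to [-7, 7]
    let mut packed = [0u8; 16];
    for (i, &w) in weights.iter().enumerate() {
        let q = ((w / scale).round().clamp(-7.0, 7.0) as i8 + 8) as u8;
        packed[i / 2] |= q << ((i % 2) * 4); // two 4-bit codes per byte
    }
    Block4 { scale, packed }
}

fn dequantize_block(b: &Block4) -> [f32; 32] {
    let mut out = [0.0f32; 32];
    for i in 0..32 {
        let q = (b.packed[i / 2] >> ((i % 2) * 4)) & 0x0F;
        out[i] = (q as i8 - 8) as f32 * b.scale;
    }
    out
}

fn main() {
    let weights = [0.5f32; 32];
    let restored = dequantize_block(&quantize_block(&weights));
    println!("{} ~ {}", weights[0], restored[0]); // round-trips to 0.5
}
```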
-
Efficient LLM inference solution on Intel GPU
-
Vcc – The Vulkan Clang Compiler
> the API was high-friction due to the shader language, and the glue between shader and CPU
Direct3D 11 compute shaders share these things with Vulkan, yet D3D11 is relatively easy to use. For example, see this library, which implements ML-targeted compute shaders for C# with minimal friction: https://github.com/Const-me/Cgml The backend, implemented in C++, is rather simple: it just binds resources and dispatches these shaders.
I think the main usability issue with Vulkan is API design: Vulkan was designed only with AAA game engines in mind. The developers of those engines have borderline-unlimited budgets, and their requirements are very different from those of ordinary folks who want to leverage GPU hardware.
-
I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
Minor update: https://github.com/Const-me/Cgml/releases/tag/1.1a Can't edit that comment anymore, too late.
rust-gpu
-
Vcc – The Vulkan Clang Compiler
Sounds cool, but this requires yet another language to learn[0]. As someone with only limited knowledge in this space, could someone tell me how comparable the compute functionality of rust-gpu[1] is, where I can just write Rust?
[0] https://github.com/Hugobros3/shady#language-syntax
[1] https://github.com/EmbarkStudios/rust-gpu
-
Candle: Torch Replacement in Rust
I don't do anything related to data science, but I feel like doing it in Rust would be nice.
You get operator overloading, so you can have ergonomic matrix operations that are also typed. Processing data on the CPU is fast, and crates like https://github.com/EmbarkStudios/rust-gpu make it very ergonomic to leverage the GPU.
I like this library for creating typed coordinate spaces for graphics programming (https://github.com/servo/euclid); I imagine something similar could be done to create refined types for matrices, so you can't multiply matrices of invalid sizes.
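As a hedged sketch of that idea (hypothetical code, not something euclid or rust-gpu provides): with const generics, the matrix dimensions live in the type, so multiplying matrices of incompatible sizes fails at compile time.

```rust
use std::ops::Mul;

// Dimensions are type parameters, so they are checked by the compiler.
#[derive(Clone, Copy)]
struct Matrix<const R: usize, const C: usize> {
    data: [[f32; C]; R],
}

// (R×K) * (K×C) = (R×C); the shared dimension K must match.
impl<const R: usize, const K: usize, const C: usize> Mul<Matrix<K, C>> for Matrix<R, K> {
    type Output = Matrix<R, C>;
    fn mul(self, rhs: Matrix<K, C>) -> Matrix<R, C> {
        let mut out = Matrix { data: [[0.0; C]; R] };
        for i in 0..R {
            for j in 0..C {
                for k in 0..K {
                    out.data[i][j] += self.data[i][k] * rhs.data[k][j];
                }
            }
        }
        out
    }
}

fn main() {
    let a: Matrix<2, 3> = Matrix { data: [[1.0; 3]; 2] };
    let b: Matrix<3, 4> = Matrix { data: [[1.0; 4]; 3] };
    let c = a * b;                // fine: Matrix<2, 4>
    println!("{}", c.data[0][0]); // 3
    // let bad = b * a;           // compile error: dimensions don't line up
}
```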
-
What's the coolest Rust project you've seen that made you go, 'Wow, I didn't know Rust could do that!'?
Do you mean rust-gpu?
-
How a Nerdsnipe Led to a Fast Implementation of Game of Life
And https://github.com/EmbarkStudios/rust-gpu/tree/main/examples with the wgpu runner (here it runs the compute shader)
-
What is Rust's potential in game development?
I don't know how major they're considered, but Embark Studios is doing quite a bit of Rust in the open-source space, most notably (IMO) rust-gpu and kajiya.
-
[rust-gpu] How do I run/build my own shaders locally?
The examples in the rust-gpu repository are a good place to start.
-
Posh: Type-Safe Graphics Programming in Rust
There's another, similar project that's being used by an actual game company: https://github.com/EmbarkStudios/rust-gpu
They see specific advantages here that would outweigh that negative. It's not my space (I play games, but know next to nothing about graphics programming), but there's at least one argument in the other direction.
-
Introducing posh: Type-Safe Graphics Programming in Rust
Could this approach work for compute shaders (GPGPU) as well? So far, I think https://github.com/EmbarkStudios/rust-gpu is the state of the art in that area, but it adds a specific Rust compiler backend for generating SPIR-V rather than leaving that up to the driver. That seems more complicated than it needs to be... but maybe it has advantages too? Thoughts?
-
Looking for high level GPU computing crate
https://github.com/embarkstudios/rust-gpu allows you to create shaders (kernels) in Rust.
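For a taste of what that looks like, here is a minimal compute shader sketch modeled on the rust-gpu examples; treat the exact attribute names and crate setup as assumptions to check against the current repository docs, since the crate has evolved.

```rust
// Hedged sketch of a rust-gpu compute shader, modeled on the repository's
// examples. Compiled to SPIR-V via rust-gpu's spirv-builder, not `cargo build`.
#![no_std]

use spirv_std::glam::UVec3;
use spirv_std::spirv;

// One workgroup of 64 threads; each thread doubles one element in place.
#[spirv(compute(threads(64)))]
pub fn main_cs(
    #[spirv(global_invocation_id)] id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] data: &mut [f32],
) {
    let i = id.x as usize;
    if i < data.len() {
        data[i] *= 2.0;
    }
}
```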
-
With what languages are video games like League of Legends (most likely) programmed?
Also, Embark Studios (former DICE people) is doing a lot of work with Rust, all open source, like Rust GPU: https://github.com/EmbarkStudios/rust-gpu
What are some alternatives?
PowerInfer - High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
llama.cpp - LLM inference in C/C++
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
wgpu - Cross-platform, safe, pure-rust graphics api.
mlx - MLX: An array framework for Apple silicon
Rust-CUDA - Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.
EmotiVoice - EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
onnxruntime-rs - Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
llamafile - Distribute and run LLMs with a single file.
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
clspv - Clspv is a compiler for OpenCL C to Vulkan compute shaders
DiligentEngine - A modern cross-platform low-level graphics library and rendering framework