HVM
rust-gpu
| | HVM | rust-gpu |
|---|---|---|
| Mentions | 107 | 82 |
| Stars | 7,052 | 6,930 |
| Growth | 1.8% | 1.7% |
| Activity | 6.7 | 8.2 |
| Latest commit | about 2 months ago | 1 day ago |
| Language | Rust | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
HVM
-
SaberVM
Reminds me of HVM[0]
[0]https://github.com/HigherOrderCO/HVM
Really interesting to see how new language concepts and refinements keep popping up this last decade, across Vale, Gleam, Hylo, Austral...
Linear types have really opened up lots of ways to improve memory management and compilation.
- GPU Survival Toolkit for the AI age: The bare minimum every developer must know
-
A new F# compiler feature: graph-based type-checking
I have a tangential question that is related to this cool new feature.
Warning: the question I ask comes from a part of my brain that is currently melted due to heavy thinking.
Context: I write a fair amount of Clojure, and in Lisps the code itself is a tree. Just like this F# parallel graph type-checker. In Lisps, one would use Macros to perform compile-time computation to accomplish something like this, I think.
More context: Idris2 allows for first class type-driven development, where the types are passed around and used to formally specify program behavior, even down to the value of a particular definition.
Given that this F# feature enables parallel analysis, wouldn't it make sense to do all of our development in a Lisp-like tree structure where the types are simply part of the program itself, like in Idris2?
Also related, is this similar to how HVM works with their "Interaction nets"?
https://github.com/HigherOrderCO/HVM
I'm afraid I don't even understand what the difference between code, data, and types are anymore... it used to make sense, but these new languages have dissolved those boundaries in my mind, and I am not sure how to build it back up again.
-
A History of Functional Hardware
Impressive presentation, but I find two things missing in particular:
* GRIN [1] - arguably a breakthrough in FP compilation; there are several implementations based on it
* HVM [2] - parallel optimal reduction. The results are very impressive.
[1] https://link.springer.com/chapter/10.1007/3-540-63237-9_19
-
Is the abstraction of lazy-functional-purity doomed to leak?
Purity has nothing to do with memoization. Haskell's semantics never "rewrite under a lambda" (unlike, e.g., HVM). Calling `(\_ -> e) ()` twice will (modulo optimizations) always perform the computation in `e` twice.
-
Can one use lambda calculus as an IR?
The most recent exploration of this that I'm aware of is HVM (another intermediate language / runtime), although it is not actually based on the lambda calculus but on the interaction calculus.
-
The Rust I Wanted Had No Future
Then, actually unrelated but worth mentioning: HVM. Finally, something new on the functional front that isn't dependent types!
- The Halting Problem Is Decidable on a Set of Asymptotic Probability One (2006)
-
Bachelor Thesis Topic
If you are into functional PL, how about https://github.com/HigherOrderCO/HVM? You could experiment if you could schedule that on a GPU?
-
For those of you who are self-taught, how did you cope with distractions while using a computer?
In the interest of seeking ways of optimizing my code, I stumbled upon http://www.rntz.net/datafun/ as a means to do incremental computations of fixpoints while avoiding redundant work. And also the idea of automatic parallelism achieved by using Interaction Nets as a model of computation https://github.com/HigherOrderCO/HVM.
rust-gpu
-
Vcc – The Vulkan Clang Compiler
Sounds cool, but this requires yet another language to learn[0]. As someone with only limited knowledge in this space, could someone tell me how the compute functionality of rust-gpu[1] compares, given that there I can just write Rust?
-
Candle: Torch Replacement in Rust
I don't do anything related to data science, but I feel like doing it in Rust would be nice.
You get operator overloading, so you can have ergonomic matrix operations that are also typed. Processing data on the CPU is fast, and crates like https://github.com/EmbarkStudios/rust-gpu make it very ergonomic to leverage the GPU.
I like this library for creating typed coordinate spaces for graphics programming (https://github.com/servo/euclid); I imagine something similar could be done to create refined types for matrices so you don't multiply matrices of invalid sizes.
-
What's the coolest Rust project you've seen that made you go, 'Wow, I didn't know Rust could do that!'?
Do you mean rust-gpu?
-
How a Nerdsnipe Led to a Fast Implementation of Game of Life
And https://github.com/EmbarkStudios/rust-gpu/tree/main/examples with the wgpu runner (here it runs the compute shader)
-
What is Rust's potential in game development?
I don't know how major they are considered, but Embark Studios is doing quite a bit of Rust in the open source space, most notably (IMO) rust-gpu and kajiya
-
[rust-gpu] How do I run/build my own shaders locally?
The examples in the rust-gpu repository are a good place to start
-
Posh: Type-Safe Graphics Programming in Rust
There's another project that's similar that's being used by an actual game company: https://github.com/EmbarkStudios/rust-gpu
They see specific advantages here that would outweigh that negative. It's not my space (I play games, but know next to nothing about graphics programming), but there's at least one argument in the other direction.
-
Introducing posh: Type-Safe Graphics Programming in Rust
Could this approach work for compute shaders (GPGPU) as well? So far, I think https://github.com/EmbarkStudios/rust-gpu is the state of the art in that area, but it adds a specific Rust compiler backend for generating SPIR-V rather than leaving that up to the driver. That seems more complicated than it needs to be... but maybe it has advantages too? Thoughts?
-
Looking for high level GPU computing crate
https://github.com/embarkstudios/rust-gpu allows you to create shaders (kernels) in Rust.
-
With what languages are video games like League of Legends (most likely) programmed?
Also Embark Studios (former DICE people) is doing a lot of work with Rust, all open source, like Rust GPU https://github.com/EmbarkStudios/rust-gpu
What are some alternatives?
Kind - A next-gen functional language [Moved to: https://github.com/Kindelia/Kind2]
llama.cpp - LLM inference in C/C++
SICL - A fresh implementation of Common Lisp
wgpu - Cross-platform, safe, pure-rust graphics api.
fslang-suggestions - The place to make suggestions, discuss and vote on F# language and core library features
Rust-CUDA - Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.
Sharp-Bilinear-Shaders - sharp bilinear shaders for RetroPie, Recalbox and Libretro for sharp pixels without pixel wobble and minimal blurring
onnxruntime-rs - Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
atom - A DSL for embedded hard realtime applications.
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
Vale - Compiler for the Vale programming language - http://vale.dev/
DiligentEngine - A modern cross-platform low-level graphics library and rendering framework