cargo-call-stack vs samsara

| | cargo-call-stack | samsara |
|---|---|---|
| Mentions | 5 | 6 |
| Stars | 555 | 64 |
| Growth | - | - |
| Activity | 0.0 | 10.0 |
| Last commit | 2 months ago | over 1 year ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub.
Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cargo-call-stack
-
Why choose async/await over threads?
Yes, it's what I wrote about in the last paragraph. If you can compute the maximum stack size of a function, then you can avoid dynamic allocation with fibers as well. You are right that such implementations do not exist right now, but I think it's technically possible, as demonstrated by tools such as https://github.com/japaric/cargo-call-stack The main stumbling block here is FFI: historically, shared libraries carry no annotations about stack usage, so functions with bounded stack usage would not be able to call even libc.
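As a concrete illustration of the kind of tooling the comment refers to, cargo-call-stack runs as a cargo subcommand on a nightly toolchain and emits a whole-program call graph annotated with worst-case stack usage (commands are a sketch based on the project's README; details may vary by version):

```shell
# Install the subcommand (requires a nightly toolchain, per the README)
cargo +nightly install cargo-call-stack

# Analyze a binary crate; the output is a Graphviz dot file whose nodes
# are annotated with each function's maximum stack usage
cargo +nightly call-stack --bin app > cg.dot

# Render the call graph to inspect the worst-case paths
dot -Tsvg cg.dot > cg.svg
```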
-
Ask not what the compiler can do for you
For Rust code, I have found https://github.com/japaric/cargo-call-stack to be the best available option, as it takes advantage of how Rust types are represented in LLVM IR to handle function pointers / dynamic dispatch a little better. An even better solution would use MIR type information as well to further narrow down the targets of dynamic calls in a Rust-specific way, but no such tool exists that I know of.
-
Debugging and profiling embedded applications.
cargo-call-stack Static stack analysis!
-
In defense of complicated programming languages
Generators can't just dump everything on the stack: they additionally have their own storage for their suspended state. If you can prove an upper bound on the number of generators created in the call graph, however, that would work. There is, for example, this nice tool for Rust that does the overapproximation.
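The upper-bound idea can be seen in miniature with Rust's async blocks, which compile to the same kind of state machine as generators: the size of a suspended state machine is fixed at compile time, so bounding how many are created bounds total memory. A minimal sketch (names are illustrative):

```rust
fn main() {
    // An async fn compiles to an anonymous state-machine type.
    async fn step() -> u32 { 1 }

    // The outer async block's state machine must hold everything live
    // across its .await points; its size is fixed at compile time.
    let fut = async { step().await + step().await };

    // size_of_val reports the full state-machine size without ever
    // polling the future.
    let bytes = std::mem::size_of_val(&fut);
    println!("state machine size: {bytes} bytes");
    assert!(bytes >= 1);
}
```

Because that size is a compile-time constant, a tool that bounds generator creation can multiply count by size to bound the total, exactly the overapproximation described above.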
-
Understanding thread stack sizes and how alpine is different
Not easy at all.
I know that in the small-embedded world, people do work on such things.
Eg https://github.com/japaric/cargo-call-stack
samsara
-
Garbage Collection for Systems Programmers
> IME it's the other way around, per-object individual lifetimes is a rare special case
It depends on your application domain. But in most cases where objects have "individual lifetimes" you can still use reference counting, which has lower latency and memory overhead than tracing GC and interacts well with manual memory management. Tracing GC can then be "plugged in" for very specific cases, preferably using a high performance concurrent implementation much like https://github.com/chc4/samsara (for Rust) or https://github.com/pebal/sgcl (for C++).
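A hedged sketch of that division of labor using only std's `Rc`/`Weak` (the `Node`/`parent`/`children` names are illustrative, not samsara's API): strong counts handle the common tree-shaped ownership deterministically, and it is precisely the genuinely cyclic leftovers that a tracing collector would be "plugged in" to reclaim.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    // Weak back-edge: does not keep the parent alive, so no cycle forms.
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let parent = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(Vec::new()),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));

    // The back-edge still works while the parent is alive...
    assert!(child.parent.borrow().upgrade().is_some());
    // ...but only the local variable holds a strong count on the parent,
    // so deallocation stays deterministic when it goes out of scope.
    assert_eq!(Rc::strong_count(&parent), 1);
    assert_eq!(Rc::strong_count(&child), 2); // local var + parent's vec
}
```

When the ownership graph cannot be made acyclic by hand like this, that is the narrow case where a concurrent tracing collector earns its keep.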
-
Why choose async/await over threads?
> Just for example: "it needs a GC" could be the heart of such an argument
Rust can actually support high-performance concurrent GC, see https://github.com/chc4/samsara for an experimental implementation. But unlike other languages it gives you the option of not using it.
-
Boehm Garbage Collector
The compiler support you need is quite limited. Here's an implementation of cycle collection in Rust: https://github.com/chc4/samsara It's made possible because Rust can distinguish read-only from read-write references (except for interior-mutable objects, but those are known to the compiler, and references to them can be treated as read-write). This avoids a global stop-the-world for the entire program.
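The aliasing distinction that argument leans on can be illustrated with plain std types (a minimal sketch, not samsara's implementation): through `&T` no mutation is possible at all, and the only exceptions are interior-mutability types like `Cell`, which are visible in the type system, so a collector can conservatively treat references to them as read-write.

```rust
use std::cell::Cell;

fn main() {
    let x = 5u32;
    // Any number of shared references may coexist, and none can write,
    // so a concurrent collector can scan through them safely.
    let r1 = &x;
    let r2 = &x;
    assert_eq!(*r1 + *r2, 10);

    // Interior mutability is an opt-in, statically visible exception:
    // writes through a shared reference require a Cell-like wrapper type.
    let c = Cell::new(5u32);
    let cr = &c;
    cr.set(6);
    assert_eq!(c.get(), 6);
}
```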
Cascading deletes are rare in practice, and if anything they are inherent to deterministic deletion, which is often a desirable property. When they're possible, one can often use arena allocation to avoid the issue altogether, since arenas are managed as a single object.
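A minimal sketch of the arena point (index-based, using only std; the names are illustrative): nodes refer to each other by index rather than by owning pointer, so even a cyclic graph is freed in one shot when the arena is dropped, with no cascading per-object deletes.

```rust
// Nodes live in one Vec and reference each other by index.
struct Node {
    value: u32,
    next: Option<usize>,
}

struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }
    fn alloc(&mut self, value: u32, next: Option<usize>) -> usize {
        self.nodes.push(Node { value, next });
        self.nodes.len() - 1
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc(1, None);
    let b = arena.alloc(2, Some(a)); // b -> a
    arena.nodes[a].next = Some(b);   // a -> b: a cycle, but no refcounts involved
    let sum: u32 = arena.nodes.iter().map(|n| n.value).sum();
    assert_eq!(sum, 3);
    // Dropping `arena` frees every node at once: the graph is managed
    // as a single object, exactly the property described above.
}
```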
-
Steel – An embedded scheme interpreter in Rust
There are concurrent GC implementations for Rust, e.g. Samsara https://redvice.org/2023/samsara-garbage-collector/ https://github.com/chc4/samsara that avoid blocking, except to a minimal extent in rare cases of contention. That fits pretty well with the pattern of "doing a bit of GC every frame".
-
Removing Garbage Collection from the Rust Language (2013)
There are a number of efforts along these lines, the most interesting is probably Samsara https://github.com/chc4/samsara https://redvice.org/2023/samsara-garbage-collector/ which implements a concurrent, thread-safe GC with no global "stop the world" phase.
-
I built a garbage collector for a language that doesn't need one
Nice blog post! I also wrote a concurrent reference-counted cycle collector in Rust (https://github.com/chc4/samsara), though I never published it to crates.io. It's neat to see the different choices people made while implementing similar goals, and dumpster works pretty differently from how I did it. I hit the same problems with concurrent mutation of the graph when trying to count the in-degree of nodes, or when references are added during a collection - I didn't even think of using generational references and just used a RwLock...
What are some alternatives?
hyperswitch - An open source payments switch written in Rust to make payments fast, reliable and affordable
sundial-gc - WIP: my Tweag open source fellowship project
itm - ARMv7-M ITM packet protocol decoder library crate and CLI tool.
nitro - Experimental OOP language that compiles to native code with a non-fragile, stable ABI
gara
patty - A pattern matching library for Nim
node-libnmap - API to access nmap from node.js
qcell - Statically-checked alternatives to RefCell and RwLock
starlight - JS engine in Rust
gc-arena - Incremental garbage collection from safe Rust
helix - A post-modern modal text editor.
Nim - Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).