ocaml-multicore vs bumpalo

| | ocaml-multicore | bumpalo |
|---|---|---|
| Mentions | 8 | 16 |
| Stars | 763 | 1,298 |
| Growth | 0.0% | - |
| Activity | 0.0 | 7.5 |
| Last Commit | over 1 year ago | 17 days ago |
| Language | OCaml | Rust |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ocaml-multicore
-
PR to Merge Multicore OCaml
1. Domains are the unit of parallelism. A domain is essentially an OS thread with a bunch of extra runtime book-keeping data. You can use Domain.spawn (https://github.com/ocaml-multicore/ocaml-multicore/blob/5.00...) to spawn a new domain, which will run the supplied function and terminate when it finishes. This is heavyweight, though; domains are expected to be long-running.
2. Domainslib is the library developed alongside multicore to aid users in exploiting parallelism. It supports nested parallelism and is pretty highly optimised (https://github.com/ocaml-multicore/domainslib/pull/29 for some graphs/numbers). The domainslib repo has some good examples: https://github.com/ocaml-multicore/domainslib/tree/master/te...
3. We've not tested against other forms of parallelism. There isn't anything stopping you from exploiting SIMD in addition to parallelism from domains.
4. No, we've not compared performance by OS.
5. No plans for the multicore team to look at accelerator integration at the moment.
-
Will rust ever have a futures executor in std?
For Algebraic Effects and Multicore OCaml specifically, I have this intro saved, and they've been publishing regular updates; here's October's. They have a paper linked from their repo's README, but I don't remember the contents offhand.
-
Graydon Hoare: What's next for language design? (2017)
Until recently, Multicore OCaml was focused on deep handlers. The people working on the formalization of effects (either for program proofs or typed effects) were quite keen to have shallow handlers integrated, however. Thus, the effect module of the OCaml 5 preview has contained both since September (see https://github.com/ocaml-multicore/ocaml-multicore/blob/5.00...). I fear that non-academic literature has not followed this change (on the academic side, see https://dl.acm.org/doi/10.1145/3434314 for a program-proofs point of view).
-
Multicore OCaml: September 2021, effect handlers will be in OCaml 5.0
Yes, it's announcing that the next but one version, 5.0, will support multicore and effect handlers.
For what it's worth you can actually start using Multicore OCaml today, there are installation instructions on the wiki: https://github.com/ocaml-multicore/ocaml-multicore
-
Aren't green threads just better than async/await?
ocaml-multicore/ocaml-multicore
-
Multicore OCaml: April 2021
Could you explain (in simple terms if possible) how Multicore OCaml achieves a memory model which is much simpler and more efficient than in Java or C (mentioned at https://github.com/ocaml-multicore/ocaml-multicore/wiki)?
I didn't see any mention of critical sections (mutexes) in the C++ examples in the documentation ("Bounding Data Races in Space and Time"). I'm not sure I understand the comparisons the writers are presenting.
-
Multicore OCaml: Dec 2020 / Jan 2021
There are getting started instructions up on https://github.com/ocaml-multicore/ocaml-multicore
bumpalo
-
Rust vs Zig Benchmarks
Long story short, heap allocation is painfully slow. Any sort of malloc will always be slower than a custom pool or a bump allocator, because a general-purpose allocator has a lot more context to deal with.
Rust makes it especially hard to use custom allocators, see bumpalo for example [0]. To be fair, progress is being made in this area [1].
Theoretically one can use a "handle table" as a replacement for pools, you can find relevant discussion at [2].
[0] https://github.com/fitzgen/bumpalo
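To make the "pointer bump vs. malloc" point concrete, here is a minimal, self-contained sketch of the idea in plain Rust. It is an illustration of the technique only, not bumpalo's actual implementation: an allocation is just rounding an offset up for alignment and advancing it, and "freeing" is resetting the offset.

```rust
// Minimal bump-allocation sketch (illustration only, not bumpalo's code).
// All memory comes from one pre-allocated block; each allocation just
// advances an offset, and everything is "freed" at once by resetting it.
struct BumpArena {
    buf: Vec<u8>,   // one large backing block
    offset: usize,  // next free position in `buf`
}

impl BumpArena {
    fn with_capacity(cap: usize) -> Self {
        BumpArena { buf: vec![0; cap], offset: 0 }
    }

    /// Reserve `size` bytes aligned to `align` (a power of two); returns
    /// the start offset, or None if the block is exhausted.
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        let start = (self.offset + align - 1) & !(align - 1); // round up
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None; // out of space in this block
        }
        self.offset = end;
        Some(start)
    }

    /// Free every allocation at once in O(1).
    fn reset(&mut self) {
        self.offset = 0;
    }
}

fn main() {
    let mut arena = BumpArena::with_capacity(1024);
    let a = arena.alloc(8, 8).unwrap();
    let b = arena.alloc(4, 4).unwrap();
    assert_eq!((a, b), (0, 8));
    arena.reset(); // entire arena reclaimed by one store
    assert_eq!(arena.alloc(16, 8), Some(0));
    println!("ok");
}
```

There is no per-allocation free list, size-class lookup, or thread synchronization here, which is exactly why the fast path beats a general-purpose malloc.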
-
Rust Memory Management
There are ways to accomplish this as well. Different allocator libraries exist for this kind of scenario, namely bumpalo, which allocates a larger block of memory from the kernel and then allocates quickly thereafter. That would amortize the cost of memory allocations in the way I think you're after?
- Custom allocators in Rust
-
A C Programmer's take on Rust.
Meaning, storing a lot of things in the same block of allocated memory? Vec is a thing, you know. There's also a bump allocator library.
-
Hypothetical scenario - What would be better - C, C++ or Rust? (Read desc.)
There are data structures like slotmap, and relatively low-level crates like bumpalo. This is not to say that either fits your use case, just that you definitely have access to the necessary parts to fit what you describe.
-
Implementing "Drop" manually to show progress
Sometimes you can put everything in a bump allocator, then when you're done, free the entire bump allocator in one go. https://docs.rs/bumpalo/
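The "free everything in one go" pattern the comment describes falls out of Rust's ownership rules: if an arena owns all of its values, dropping the arena drops them all together. A hedged sketch with a plain Vec-backed arena (a stand-in for bumpalo, not its API), using a drop counter to make the batching visible:

```rust
use std::cell::Cell;

// Sketch: an arena that owns its values, so dropping the arena frees
// everything at once -- the pattern described above with bumpalo,
// shown here with a simple Vec-backed arena.
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }
    fn alloc(&mut self, value: T) -> &T {
        self.items.push(value);
        self.items.last().unwrap()
    }
}

// A value that counts how many times it has been dropped.
struct Noisy<'a>(&'a Cell<u32>);
impl<'a> Drop for Noisy<'a> {
    fn drop(&mut self) {
        self.0.set(self.0.get() + 1);
    }
}

fn main() {
    let drops = Cell::new(0);
    {
        let mut arena = Arena::new();
        for _ in 0..3 {
            arena.alloc(Noisy(&drops));
        }
        assert_eq!(drops.get(), 0); // nothing dropped while arena lives
    } // arena goes out of scope: all three values dropped together
    assert_eq!(drops.get(), 3);
    println!("dropped {}", drops.get());
}
```

Note that bumpalo itself does not run `Drop` for plainly arena-allocated values (that is what its `boxed::Box` wrapper is for); the sketch only demonstrates the "one deallocation at the end" shape.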
-
Any languages doing anything interesting with allocators?
This is useful with crates like bumpalo which give you bump-allocation arenas whose lifetimes are tied to the objects they allocate.
-
I’m Porting the TypeScript Type Checker Tsc to Go
TSC doesn't need to "stick around", right? Just a run-once and the program is over?
In those cases, https://github.com/fitzgen/bumpalo works amazingly as an arena. You can pretty much forget about reference counting and have direct references everywhere in your graph. The disadvantage is that it's hard to modify your tree without leaving memory around.
We use it extensively in http://github.com/dioxusLabs/dioxus and don't need to worry about Rc anywhere in the graph/diffing code.
-
Allocating many Boxes at once
Probably bumpalo, but then its Box will have a lifetime parameter - bumpalo::boxed::Box<'a, dyn MyTrait>
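The lifetime parameter mentioned above is the key constraint: an arena-allocated handle borrows the arena's storage, so the borrow checker stops it from outliving the arena. A hedged sketch of that shape in plain Rust, where `ArenaBox` is an illustrative stand-in for `bumpalo::boxed::Box<'a, T>`, not bumpalo's actual type:

```rust
// Sketch of why an arena "box" carries a lifetime parameter: the handle
// borrows the arena's storage, so it cannot outlive the arena.
// `ArenaBox` and `Arena` here are illustrative, not bumpalo's types.
trait MyTrait {
    fn answer(&self) -> i32;
}

struct Impl(i32);
impl MyTrait for Impl {
    fn answer(&self) -> i32 {
        self.0
    }
}

// The 'a lifetime ties each handle to the arena that owns the storage.
struct ArenaBox<'a, T: ?Sized>(&'a T);

struct Arena {
    slots: Vec<Box<dyn MyTrait>>, // backing storage owned by the arena
}

impl Arena {
    fn alloc<'a>(&'a mut self, v: Box<dyn MyTrait>) -> ArenaBox<'a, dyn MyTrait> {
        self.slots.push(v);
        ArenaBox(&**self.slots.last().unwrap())
    }
}

fn main() {
    let mut arena = Arena { slots: Vec::new() };
    let b: ArenaBox<'_, dyn MyTrait> = arena.alloc(Box::new(Impl(42)));
    assert_eq!(b.0.answer(), 42);
    // `b` cannot escape the scope of `arena`; returning it from a function
    // that owns `arena` would be a compile error, not a use-after-free.
    println!("{}", b.0.answer());
}
```

That compile-time tie is the "awkwardness" people run into, but it is also what makes arena handles safe to pass around freely while the arena lives.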
-
Graydon Hoare: What's next for language design? (2017)
Strictly speaking, Rust doesn't need this as a built-in language feature, because its design allows it to be implemented as a third-party library: https://docs.rs/bumpalo
The biggest problem is that there's some awkwardness around RAII; I'm not sure whether that could have been avoided with a different approach.
Of course, ideally you'd want it to be compatible with the standard-library APIs that allocate. This is implemented, but is not yet at the point where they're sure they won't want to make backwards-incompatible changes to it, so you can only use it on nightly. https://doc.rust-lang.org/stable/std/alloc/trait.Allocator.h...
Or are you suggesting that the choice of allocator should be dynamically scoped, so that allocations that occur while the bump allocator is alive automatically use it even if they're in code that doesn't know about it? I think it's not possible for that to be memory-safe; all allocations using the bump allocator need to know about its lifetime, so that they can be sure not to outlive it, which would cause use-after-free bugs. I'm assuming that Odin just makes the programmer responsible for this, and if they get it wrong then memory corruption might occur; for a memory-safe language like Rust, that's not acceptable.
What are some alternatives?
eioio - Effects-based direct-style IO for multicore OCaml
rust-phf - Compile time static maps for Rust
domainslib - Parallel Programming over Domains
generational-arena - A safe arena allocator that allows deletion without suffering from the ABA problem by using generational indices.
roast - 🦋 Raku test suite
hashbrown - Rust port of Google's SwissTable hash map
enso - Hybrid visual and textual functional programming.
moonfire-nvr - Moonfire NVR, a security camera network video recorder
loom - Concurrency permutation testing tool for Rust.
feel
salsa - A generic framework for on-demand, incrementalized computation. Inspired by adapton, glimmer, and rustc's query system.
grenad - Tools to sort, merge, write, and read immutable key-value pairs :tomato: