mpl VS samsara

Compare mpl and samsara to see how they differ.

mpl

The MaPLe compiler for efficient and scalable parallel functional programming (by MPLLang)

samsara

A reference-counting cycle collection library in Rust (by chc4)
              mpl                    samsara
Mentions      7                      6
Stars         287                    64
Growth        15.0%                  -
Activity      8.4                    10.0
Last commit   about 2 months ago     over 1 year ago
Language      Standard ML            Rust
License       GNU GPL v3.0 or later  -
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

mpl

Posts with mentions or reviews of mpl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-30.
  • Garbage Collection for Systems Programmers
    7 projects | news.ycombinator.com | 30 Mar 2024
    I'm one of the authors of this work -- I can explain a little.

    "Provably efficient" means that the language provides worst-case performance guarantees.

    For example in the "Automatic Parallelism Management" paper (https://dl.acm.org/doi/10.1145/3632880), we develop a compiler and run-time system that can execute extremely fine-grained parallel code without losing performance. (Concretely, imagine tiny tasks of around only 10-100 instructions each.)

    The key idea is to make sure that any task which is *too tiny* is executed sequentially instead of in parallel. To make this happen, we use a scheduler that runs in the background during execution. It is the scheduler's job to decide on-the-fly which tasks should be sequentialized and which tasks should be "promoted" into actual threads that can run in parallel. Intuitively, each promotion incurs a cost, but also exposes parallelism.

    In the paper, we present our scheduler and prove a worst-case performance bound. We specifically show that the total overhead of promotion will be at most a small constant factor (e.g., 1% overhead), and also that the theoretical amount of parallelism is unaffected, asymptotically.
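    The cutoff idea described above can be illustrated with a minimal sketch in Rust (this is not MaPLe's actual scheduler, which promotes tasks dynamically at run time; here the granularity threshold `CUTOFF` is a hypothetical compile-time constant chosen for illustration):

    ```rust
    use std::thread;

    // Hypothetical granularity threshold: tasks below it are "too tiny"
    // to be worth promoting into real parallel threads.
    const CUTOFF: u64 = 20;

    // Sequential fallback for fine-grained work.
    fn fib_seq(n: u64) -> u64 {
        if n < 2 { n } else { fib_seq(n - 1) + fib_seq(n - 2) }
    }

    // Promote to a real thread only above the cutoff;
    // below it, run sequentially to avoid per-task overhead.
    fn fib_par(n: u64) -> u64 {
        if n < CUTOFF {
            fib_seq(n)
        } else {
            let left = thread::spawn(move || fib_par(n - 1));
            let right = fib_par(n - 2);
            left.join().unwrap() + right
        }
    }

    fn main() {
        println!("fib(25) = {}", fib_par(25)); // prints "fib(25) = 75025"
    }
    ```

    The difference in the paper's approach is that the promotion decision is made on the fly by the scheduler rather than baked in by the programmer, which is what makes the worst-case overhead bound possible.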

    All of this is implemented in MaPLe (https://github.com/mpllang/mpl) and you can go play with it now!

  • MPL: Automatic Management of Parallelism
    1 project | news.ycombinator.com | 28 Mar 2024
  • Good languages for writing compilers in?
    8 projects | /r/ProgrammingLanguages | 11 May 2023
    MaPLe is a fork of MLton: https://github.com/MPLLang/mpl
  • Comparing Objective Caml and Standard ML
    5 projects | news.ycombinator.com | 15 Feb 2023
    Some of us are still using SML for research and teaching, e.g. https://github.com/mpllang/mpl
  • MaPLe Compiler for Parallel ML v0.3 Release Notes
    1 project | news.ycombinator.com | 26 Jun 2022
  • MPL-v0.3 Release Notes
    1 project | /r/sml | 26 Jun 2022

samsara

Posts with mentions or reviews of samsara. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-30.
  • Garbage Collection for Systems Programmers
    7 projects | news.ycombinator.com | 30 Mar 2024
    > IME it's the other way around, per-object individual lifetimes is a rare special case

    It depends on your application domain. But in most cases where objects have "individual lifetimes" you can still use reference counting, which has lower latency and memory overhead than tracing GC and interacts well with manual memory management. Tracing GC can then be "plugged in" for very specific cases, preferably using a high performance concurrent implementation much like https://github.com/chc4/samsara (for Rust) or https://github.com/pebal/sgcl (for C++).
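    A small sketch of the limitation that motivates plugging in a cycle collector: plain reference counting (`Rc` in Rust's standard library) leaks any cycle of strong references, so programmers must break cycles by hand with `Weak`, as below; a concurrent cycle collector removes that burden for the cases where it's needed.

    ```rust
    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // A parent/child pair. Two strong Rc edges here would form a cycle
    // that reference counting alone can never reclaim; the weak
    // back-edge breaks it manually.
    struct Node {
        parent: RefCell<Weak<Node>>,
        child: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let parent = Rc::new(Node {
            parent: RefCell::new(Weak::new()),
            child: RefCell::new(None),
        });
        let child = Rc::new(Node {
            parent: RefCell::new(Rc::downgrade(&parent)),
            child: RefCell::new(None),
        });
        *parent.child.borrow_mut() = Some(Rc::clone(&child));

        // `child` has two strong owners (our handle + parent's edge),
        // but the back-edge to `parent` is only weak, so dropping both
        // handles frees everything deterministically.
        assert_eq!(Rc::strong_count(&child), 2);
        assert_eq!(Rc::weak_count(&parent), 1);
    }
    ```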

  • Why choose async/await over threads?
    11 projects | news.ycombinator.com | 25 Mar 2024
    > Just for example: "it needs a GC" could be the heart of such an argument

    Rust can actually support high-performance concurrent GC, see https://github.com/chc4/samsara for an experimental implementation. But unlike other languages it gives you the option of not using it.

  • Boehm Garbage Collector
    9 projects | news.ycombinator.com | 21 Jan 2024
    The compiler support you need is quite limited. Here's an implementation of cycle collection in Rust: https://github.com/chc4/samsara It's made possible because Rust can tell apart read-only and read-write references (except for interior mutable objects, but these are known to the compiler and references to them can be treated as read-write). This avoids a global stop-the-world for the entire program.

    Cascading deletes are rare in practice, and if anything they are inherent to deterministic deletion, which is often a desirable property. When they're possible, one can often use arena allocation to avoid the issue altogether, since arenas are managed as a single object.
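    The arena point can be sketched with a minimal index-based arena (a hand-rolled illustration, not any particular crate's API): nodes refer to each other by index into a backing `Vec`, so the entire graph is freed in one step when the arena is dropped and no cascading per-object deletes occur.

    ```rust
    // A tiny index-based arena: the whole allocation is managed
    // as a single object and freed all at once.
    struct Arena<T> {
        items: Vec<T>,
    }

    #[derive(Clone, Copy)]
    struct Id(usize);

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { items: Vec::new() }
        }
        fn alloc(&mut self, value: T) -> Id {
            self.items.push(value);
            Id(self.items.len() - 1)
        }
        fn get(&self, id: Id) -> &T {
            &self.items[id.0]
        }
    }

    // Linked nodes reference each other by Id, not by pointer,
    // so there is no per-node ownership to unwind on drop.
    struct Node {
        value: i32,
        next: Option<Id>,
    }

    fn main() {
        let mut arena = Arena::new();
        let b = arena.alloc(Node { value: 2, next: None });
        let a = arena.alloc(Node { value: 1, next: Some(b) });
        assert_eq!(arena.get(a).value, 1);
        let next = arena.get(a).next.unwrap();
        assert_eq!(arena.get(next).value, 2);
    } // `arena` dropped here: both nodes freed together, no cascade
    ```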

  • Steel – An embedded scheme interpreter in Rust
    13 projects | news.ycombinator.com | 3 Dec 2023
    There are concurrent GC implementations for Rust, e.g. Samsara https://redvice.org/2023/samsara-garbage-collector/ https://github.com/chc4/samsara that avoid blocking, except to a minimal extent in rare cases of contention. That fits pretty well with the pattern of "doing a bit of GC every frame".
  • Removing Garbage Collection from the Rust Language (2013)
    9 projects | news.ycombinator.com | 11 Sep 2023
    There are a number of efforts along these lines, the most interesting is probably Samsara https://github.com/chc4/samsara https://redvice.org/2023/samsara-garbage-collector/ which implements a concurrent, thread-safe GC with no global "stop the world" phase.
  • I built a garbage collector for a language that doesn't need one
    3 projects | news.ycombinator.com | 14 Aug 2023
    Nice blog post! I also wrote a concurrent reference counted cycle collector in Rust (https://github.com/chc4/samsara) though never published it to crates.io. It's neat to see the different choices that people made implementing similar goals, and dumpster works pretty differently from how I did it. I hit the same problems wrt concurrent mutation of the graph when trying to count in-degree of nodes, or adding references during a collection - I didn't even think of doing generational references and just have a RwLock...

What are some alternatives?

When comparing mpl and samsara you can also consider the following projects:

cakeml - CakeML: A Verified Implementation of ML

sundial-gc - WIP: my Tweag open source fellowship project

LunarML - The Standard ML compiler that produces Lua/JavaScript

nitro - Experimental OOP language that compiles to native code with a non-fragile, stable ABI

HPCInfo - Information about many aspects of high-performance computing. Wiki content moved to ~/docs.

gara

mlton - The MLton repository

patty - A pattern matching library for Nim

1ml - 1ML prototype interpreter

node-libnmap - API to access nmap from node.js

ppci - A compiler for ARM, X86, MSP430, xtensa and more implemented in pure Python

qcell - Statically-checked alternatives to RefCell and RwLock