h5cpp
mpl
| | h5cpp | mpl |
|---|---|---|
| Mentions | 2 | 7 |
| Stars | 139 | 285 |
| Growth | - | 16.8% |
| Activity | 0.0 | 8.4 |
| Last commit | about 2 years ago | about 2 months ago |
| Language | C++ | Standard ML |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
h5cpp
- Could not run a simple Fortran program when trying to install OpenMPI

  I had similar doubts using the h5cpp library (see this GitHub issue for details). However, it seems that `brew install openmpi` can install OpenMPI on my Mac running macOS Monterey.
mpl
- Garbage Collection for Systems Programmers
I'm one of the authors of this work -- I can explain a little.
"Provably efficient" means that the language provides worst-case performance guarantees.
For example in the "Automatic Parallelism Management" paper (https://dl.acm.org/doi/10.1145/3632880), we develop a compiler and run-time system that can execute extremely fine-grained parallel code without losing performance. (Concretely, imagine tiny tasks of around only 10-100 instructions each.)
The key idea is to make sure that any task which is *too tiny* is executed sequentially instead of in parallel. To make this happen, we use a scheduler that runs in the background during execution. It is the scheduler's job to decide on-the-fly which tasks should be sequentialized and which tasks should be "promoted" into actual threads that can run in parallel. Intuitively, each promotion incurs a cost, but also exposes parallelism.
In the paper, we present our scheduler and prove a worst-case performance bound. We specifically show that the total overhead of promotion will be at most a small constant factor (e.g., 1% overhead), and also that the theoretical amount of parallelism is unaffected, asymptotically.
All of this is implemented in MaPLe (https://github.com/mpllang/mpl) and you can go play with it now!
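The promotion idea described above can be sketched in ordinary Python. This is not MaPLe's actual scheduler (which decides promotions on the fly at run time, not with a fixed threshold); it is only a toy illustration of granularity control, with a hypothetical `CUTOFF` constant standing in for the scheduler's decision. Tasks below the cutoff are sequentialized; tasks above it "promote" one branch into a real thread. (Because of Python's GIL this shows the control flow, not an actual speedup.)

```python
import threading

# Hypothetical granularity threshold: calls smaller than this are
# considered "too tiny" and run sequentially instead of in parallel.
CUTOFF = 18

def fib(n):
    if n < 2:
        return n
    if n < CUTOFF:
        # Too tiny: sequentialize both branches on the current thread.
        return fib(n - 1) + fib(n - 2)
    # Big enough: "promote" one branch into an actual thread that
    # runs in parallel with the other branch.
    result = {}
    def task():
        result["a"] = fib(n - 1)
    t = threading.Thread(target=task)
    t.start()          # promoted branch runs concurrently
    b = fib(n - 2)     # other branch runs on the current thread
    t.join()           # wait for the promoted branch to finish
    return result["a"] + b

print(fib(25))  # 75025
```

In the real system the cutoff is not a user-visible constant: the background scheduler measures execution and decides dynamically which tasks to promote, which is what makes the worst-case overhead bound possible.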
- MPL: Automatic Management of Parallelism
- Good languages for writing compilers in?

  MaPLe is a fork of MLton: https://github.com/MPLLang/mpl
- Comparing Objective Caml and Standard ML

  Some of us are still using SML for research and teaching, e.g. https://github.com/mpllang/mpl
- MaPLe Compiler for Parallel ML v0.3 Release Notes
- MPL-v0.3 Release Notes
What are some alternatives?
dmtcp - DMTCP: Distributed MultiThreaded CheckPointing
cakeml - CakeML: A Verified Implementation of ML
h5pp - A C++17 interface for HDF5
LunarML - The Standard ML compiler that produces Lua/JavaScript
mpl - A C++17 message passing library based on MPI
HPCInfo - Information about many aspects of high-performance computing. Wiki content moved to ~/docs.
1ml - 1ML prototype interpreter
R-sharp - R# is an R-like vectorized language implemented on the .NET environment for bioinformatics data analysis
ppci - A compiler for ARM, X86, MSP430, xtensa and more implemented in pure Python
gdl - GDL - GNU Data Language
mlton - The MLton repository