| | HPCInfo | mpl |
|---|---|---|
| Mentions | 1 | 7 |
| Stars | 260 | 287 |
| Growth | - | 15.0% |
| Activity | 8.6 | 8.4 |
| Latest commit | 14 days ago | about 2 months ago |
| Language | C | Standard ML |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
HPCInfo
- Open source arm64 Fortran?
I wrote a script to make it easy for people to install and try the new Flang: https://github.com/jeffhammond/HPCInfo/blob/master/buildscripts/llvm-git.sh
mpl
- Garbage Collection for Systems Programmers
I'm one of the authors of this work -- I can explain a little.
"Provably efficient" means that the language provides worst-case performance guarantees.
For example, in the "Automatic Parallelism Management" paper (https://dl.acm.org/doi/10.1145/3632880), we develop a compiler and run-time system that can execute extremely fine-grained parallel code without losing performance. (Concretely, imagine tiny tasks of around only 10-100 instructions each.)
The key idea is to make sure that any task which is *too tiny* is executed sequentially instead of in parallel. To make this happen, we use a scheduler that runs in the background during execution. It is the scheduler's job to decide on-the-fly which tasks should be sequentialized and which tasks should be "promoted" into actual threads that can run in parallel. Intuitively, each promotion incurs a cost, but also exposes parallelism.
In the paper, we present our scheduler and prove a worst-case performance bound. We specifically show that the total overhead of promotion will be at most a small constant factor (e.g., 1% overhead), and also that the theoretical amount of parallelism is unaffected, asymptotically.
All of this is implemented in MaPLe (https://github.com/mpllang/mpl) and you can go play with it now!
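To make the "fine-grained tasks" point concrete, here is a minimal sketch of what such code looks like in MaPLe, assuming MPL's `ForkJoin.par` fork-join primitive. The recursive calls near the leaves are exactly the 10-100-instruction tasks described above; with automatic parallelism management, the scheduler sequentializes most of them on the fly and only promotes a few into real threads.

```sml
(* Naive parallel Fibonacci: every recursive call is spawned as a
   potential parallel task, no manual granularity cutoff needed. *)
fun fib n =
  if n < 2 then n
  else
    let
      (* ForkJoin.par takes two thunks and may run them in parallel;
         tiny tasks are cheap because promotion happens lazily. *)
      val (a, b) = ForkJoin.par (fn _ => fib (n - 1), fn _ => fib (n - 2))
    in
      a + b
    end

val _ = print (Int.toString (fib 20) ^ "\n")  (* fib 20 = 6765 *)
```

In a conventional runtime you would hand-tune a sequential cutoff (e.g., `if n < 20 then seqFib n else ...`) to avoid drowning in task-spawn overhead; the point of the paper is that the scheduler makes this tuning unnecessary, with a proven worst-case bound on the promotion overhead.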
- MPL: Automatic Management of Parallelism
- Good languages for writing compilers in?
MaPLe is a fork of MLton: https://github.com/MPLLang/mpl
- Comparing Objective Caml and Standard ML
Some of us are still using SML for research and teaching, e.g. https://github.com/mpllang/mpl
- MaPLe Compiler for Parallel ML v0.3 Release Notes
- MPL-v0.3 Release Notes
What are some alternatives?
h5cpp - C++17 templates between [stl::vector | armadillo | eigen3 | ublas | blitz++] and HDF5 datasets
cakeml - CakeML: A Verified Implementation of ML
libgrape-lite - 🍇 A C++ library for parallel graph processing (GRAPE) 🍇
LunarML - The Standard ML compiler that produces Lua/JavaScript
mpl - A C++17 message passing library based on MPI
mlton - The MLton repository
parallel-kd-tree - Parallel k-d tree with C++17, MPI and OpenMP
1ml - 1ML prototype interpreter
arbor - The Arbor multi-compartment neural network simulation library.
ppci - A compiler for ARM, X86, MSP430, xtensa and more implemented in pure Python
RaftLib - The RaftLib C++ library, streaming/dataflow concurrency via C++ iostream-like operators
install-mlkit - Action for installing MLKit