| | ppci | mpl |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 322 | 287 |
| Growth | - | 15.0% |
| Activity | 0.0 | 8.4 |
| Latest commit | almost 2 years ago | about 2 months ago |
| Language | Python | Standard ML |
| License | BSD 2-clause "Simplified" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ppci
- Good languages for writing compilers in?
Hey guys, have any of you tried creating your own language using Python? I'm interested in giving it a shot and was wondering if anyone has any tips or resources to recommend. Thanks in advance!
It's not super maintained, but you might enjoy building something with ppci, the Pure Python Compiler Infrastructure. It has some front-ends and some back-ends. There's also PeachPy for an assembler. People like using Lark for parsing, I hear.
Hmm
I disagree
- PPCI (Pure Python Compiler Infrastructure) Project
- Windelbouwman/ppci: A compiler for ARM, x86, MSP430, xtensa in pure Python
mpl
- Garbage Collection for Systems Programmers
I'm one of the authors of this work -- I can explain a little.
"Provably efficient" means that the language provides worst-case performance guarantees.
For example in the "Automatic Parallelism Management" paper (https://dl.acm.org/doi/10.1145/3632880), we develop a compiler and run-time system that can execute extremely fine-grained parallel code without losing performance. (Concretely, imagine tiny tasks of around only 10-100 instructions each.)
The key idea is to make sure that any task which is *too tiny* is executed sequentially instead of in parallel. To make this happen, we use a scheduler that runs in the background during execution. It is the scheduler's job to decide on-the-fly which tasks should be sequentialized and which tasks should be "promoted" into actual threads that can run in parallel. Intuitively, each promotion incurs a cost, but also exposes parallelism.
In the paper, we present our scheduler and prove a worst-case performance bound. We specifically show that the total overhead of promotion will be at most a small constant factor (e.g., 1% overhead), and also that the theoretical amount of parallelism is unaffected, asymptotically.
All of this is implemented in MaPLe (https://github.com/mpllang/mpl) and you can go play with it now!
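The promotion policy described above can be sketched as a toy simulation in Python. This is purely illustrative, not MaPLe's actual scheduler: the heartbeat interval, class, and function names here are all invented for the sketch. The point is the performance bound: promotions (the costly operation) are capped at one per `HEARTBEAT` potential forks, so the overhead is a small, tunable constant factor.

```python
# Toy model of heartbeat-style task promotion: every Nth potential fork,
# one pending task would be promoted to a real thread; the rest run inline.
# Illustrative sketch only -- not MaPLe's implementation.

HEARTBEAT = 8  # promote at most one task per 8 forks (illustrative constant)

class ToyScheduler:
    def __init__(self, heartbeat=HEARTBEAT):
        self.heartbeat = heartbeat
        self.forks = 0       # potential parallel tasks created
        self.promotions = 0  # tasks that would become real threads

    def fork(self, left, right):
        self.forks += 1
        if self.forks % self.heartbeat == 0:
            # In a real runtime, `right` would be handed to a worker thread.
            self.promotions += 1
        return left(), right()  # sequential simulation: run both inline

def fib(n, sched):
    # Extremely fine-grained parallelism: a fork per recursive call.
    if n < 2:
        return n
    a, b = sched.fork(lambda: fib(n - 1, sched), lambda: fib(n - 2, sched))
    return a + b

sched = ToyScheduler()
print(fib(15, sched))  # 610
# The overhead bound: at most one promotion per HEARTBEAT forks.
print(sched.promotions <= sched.forks // HEARTBEAT)  # True
```

Note how the bound holds regardless of how tiny the individual tasks are; that is the sense in which the scheduling is "provably efficient".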
- MPL: Automatic Management of Parallelism
- Good languages for writing compilers in?
Maple is a fork of MLton: https://github.com/MPLLang/mpl
- Comparing Objective Caml and Standard ML
Some of us are still using SML for research and teaching, e.g. https://github.com/mpllang/mpl
- MaPLe Compiler for Parallel ML v0.3 Release Notes
- MPL-v0.3 Release Notes
What are some alternatives?
Pegged - A Parsing Expression Grammar (PEG) module, using the D programming language.
cakeml - CakeML: A Verified Implementation of ML
backrooms - 3D, CISC Architecture and Esolang
LunarML - The Standard ML compiler that produces Lua/JavaScript
Assembler - outdated, do not use
HPCInfo - Information about many aspects of high-performance computing. Wiki content moved to ~/docs.
watim - Language which compiles to the WebAssembly Text Format
mlton - The MLton repository
manicdigger - Manic Digger - a multiplayer block-building voxel game, Minecraft clone
1ml - 1ML prototype interpreter
rust-numpy - PyO3-based Rust bindings of the NumPy C-API
install-mlkit - Action for installing MLKit