libgrape-lite
mpl
| | libgrape-lite | mpl |
|---|---|---|
| Mentions | 3 | 7 |
| Stars | 365 | 285 |
| Growth | 1.1% | 16.8% |
| Activity | 6.3 | 8.4 |
| Latest commit | 27 days ago | about 2 months ago |
| Language | C++ | Standard ML |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
libgrape-lite
libgrape-lite VS CXXGraph - a user suggested alternative
2 projects | 17 Mar 2022
GraphScope: A One-Stop Large-Scale Graph Computing System
We don't have a benchmark comparing the analytical engine in GraphScope (aka GAE) with GraphX/Giraph. However, we have evaluated the performance of GAE's underlying engine (libgrape-lite) with the LDBC Graph Analytics Benchmark, and it achieves performance comparable to or better than state-of-the-art systems [2].
[1]: https://github.com/alibaba/libgrape-lite
[2]: https://github.com/alibaba/libgrape-lite/blob/master/Perform...
mpl
Garbage Collection for Systems Programmers
I'm one of the authors of this work -- I can explain a little.
"Provably efficient" means that the language provides worst-case performance guarantees.
For example in the "Automatic Parallelism Management" paper (https://dl.acm.org/doi/10.1145/3632880), we develop a compiler and run-time system that can execute extremely fine-grained parallel code without losing performance. (Concretely, imagine tiny tasks of around only 10-100 instructions each.)
The key idea is to make sure that any task which is *too tiny* is executed sequentially instead of in parallel. To make this happen, we use a scheduler that runs in the background during execution. It is the scheduler's job to decide on-the-fly which tasks should be sequentialized and which tasks should be "promoted" into actual threads that can run in parallel. Intuitively, each promotion incurs a cost, but also exposes parallelism.
In the paper, we present our scheduler and prove a worst-case performance bound. We specifically show that the total overhead of promotion will be at most a small constant factor (e.g., 1% overhead), and also that the theoretical amount of parallelism is unaffected, asymptotically.
All of this is implemented in MaPLe (https://github.com/mpllang/mpl) and you can go play with it now!
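To make the granularity problem concrete, here is a minimal sketch of fine-grained fork-join code written against `ForkJoin.par` from the MPL repository (the classic parallel Fibonacci; the `30` cutoff-free version is deliberately naive). Near the leaves, each task is only a handful of instructions, which is exactly the situation the automatic parallelism management handles: most of these tasks are executed sequentially, and the scheduler promotes only a few of them into real parallel threads.

```sml
(* Naive fine-grained parallel Fibonacci. Every recursive call is
   exposed as a potential parallel task, even tiny ones near the
   leaves. Under MPL's scheduler, tasks that are too small to be
   worth a promotion simply run sequentially. *)
fun fib n =
  if n < 2 then n
  else
    let
      (* ForkJoin.par : (unit -> 'a) * (unit -> 'b) -> 'a * 'b *)
      val (a, b) = ForkJoin.par (fn () => fib (n - 1),
                                 fn () => fib (n - 2))
    in
      a + b
    end

val _ = print (Int.toString (fib 30) ^ "\n")
```

In a conventional fork-join runtime, code like this would need a manual sequential cutoff (e.g., "if n < 20, don't spawn") to avoid drowning in task-creation overhead; the point of the paper is that the promotion-based scheduler makes that manual tuning unnecessary while keeping the overhead provably bounded.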
- MPL: Automatic Management of Parallelism
Good languages for writing compilers in?
MaPLe is a fork of MLton: https://github.com/MPLLang/mpl
Comparing Objective Caml and Standard ML
Some of us are still using SML for research and teaching, e.g. https://github.com/mpllang/mpl
- MaPLe Compiler for Parallel ML v0.3 Release Notes
- MPL-v0.3 Release Notes
What are some alternatives?
QuickQanava - :link: C++17 network / graph visualization library - Qt6 / QML node editor.
cakeml - CakeML: A Verified Implementation of ML
GraphScope - 🔨 🍇 💻 🚀 GraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba | 一站式图计算系统
LunarML - The Standard ML compiler that produces Lua/JavaScript
CXXGraph - Header-Only C++ Library for Graph Representation and Algorithms
1ml - 1ML prototype interpreter
euler - A distributed graph deep learning framework.
HPCInfo - Information about many aspects of high-performance computing. Wiki content moved to ~/docs.
libvineyard - vineyard (v6d): an in-memory immutable data manager. [Moved to: https://github.com/alibaba/v6d]
ppci - A compiler for ARM, X86, MSP430, xtensa and more implemented in pure Python
mlton - The MLton repository