legion
nanobind
| | legion | nanobind |
|---|---|---|
| Mentions | 11 | 11 |
| Stars | 647 | 2,028 |
| Growth | 2.2% | - |
| Activity | 9.9 | 9.6 |
| Latest commit | 16 days ago | 7 days ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
legion
- Legion 24.03.0 – Control Replication
-
Antithesis of a One-in-a-Million Bug: Taming Demonic Nondeterminism
I work on a distributed runtime system for heterogeneous supercomputers [1].
As an example of the sort of bug we regularly deal with, I am at this exact moment tracking down a freeze that occurs on 8,192 nodes of a supercomputer [2]. That means I'm using about 64,000 GPUs and about half a million CPU cores. The smallest node count at which I've seen my issue is 2,048 nodes, and at that scale it only happens about 10% of the time.
We've been debating internally whether Antithesis could help us or not. On the one hand, the fuzzing to explore the state space, and deterministic reproduction, are exactly what we want. On the other hand, we believe our state space is much larger than what you see in a typical distributed database. (And not just because of the sheer scale of things: even on a single node we have state machines with on the order of hundreds to thousands of states in them.) Based on the post here and the "scenario" count explored in CouchDB, I'm not convinced you'd be able to handle us. :-)
I'd be curious what you think. Happy to discuss here, or contact info in profile.
[1]: https://legion.stanford.edu/
[2]: https://www.olcf.ornl.gov/frontier/
-
Progress on No-GIL CPython
Parallelism in CS is a bit like security in CS. People know it matters in the abstract sense, but you really only get into it if you look for the training specifically. We're getting better at both over time: just as more languages/libraries/etc. are secure by default, more now are parallel by default. There's a ways to go, but I'm glad we didn't do this prematurely, because the technology has improved a lot in the last decade. Look for example at what we can do (safely!) with Rayon in Rust vs (unsafely!) with OpenMP in C++.
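That safe-by-default flavor shows up even in Python's standard library: `concurrent.futures` hands out parallelism behind an interface that preserves the sequential answer. A minimal sketch (the function and workload are made up for illustration; with today's GIL the threads below mainly help I/O-bound work, while free-threaded CPython would extend this to CPU-bound code):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n: int) -> int:
    # A pure function: no shared mutable state, so nothing to race on.
    return n * n

# map() fans the calls out across worker threads but yields results
# in input order -- callers observe the same answer a plain loop
# would produce, so the parallelism stays invisible.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```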
And there are things even further afield like what I work on [1][2][3].
[1]: https://legion.stanford.edu/
[2]: https://regent-lang.org/
[3]: https://github.com/nv-legate/cunumeric
-
Mojo is now available on Mac
Chapel has at least several full-time developers at Cray/HPE and (I think) the US national labs, and has had some for almost two decades. That's much more than $100k.
Chapel is also just one of many other projects broadly interested in developing new programming languages for "high performance" programming. Out of that large field, Chapel is not especially related to the specific ideas or design goals of Mojo. Much more related are things like Codon (https://exaloop.io), and the metaprogramming models in Terra (https://terralang.org), Nim (https://nim-lang.org), and Zig (https://ziglang.org).
But Chapel is great! It has a lot of good ideas, especially for distributed-memory programming, which is its historical focus. It is more related to Legion (https://legion.stanford.edu, https://regent-lang.org), parallel & distributed Fortran, ZPL, etc.
-
Announcing Chapel 1.32
I should also note that there is Pygion if you want to use Python. Not a lot of great reference material right now, but there's the paper:
https://legion.stanford.edu/pdfs/pygion2019.pdf
And code samples:
https://github.com/StanfordLegion/legion/tree/stable/binding...
-
Is anyone using PyPy for real work?
We use PyPy to perform verification of our software stack [1], and also for profiling tools [2]. The verification tool is basically a complete reimplementation of our main product, and therefore encodes a massive amount of business logic (making it difficult, if not impossible, to rewrite in another language). As with other users, we found the switch to PyPy was seamless; it gives us something like a 2.5x speedup out of the box, with (I think) higher speedups in some specific cases.
We eventually rewrote the profiling tool in Rust for additional speedups, but as mentioned, the verification engine is probably too complicated to ever rewrite, so we really appreciate drop-in tools like PyPy that can speed up our code.
[1]: https://github.com/StanfordLegion/legion/blob/master/tools/l...
[2]: https://github.com/StanfordLegion/legion/blob/master/tools/l...
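To make the drop-in point concrete, here is a sketch of the kind of interpreter-bound pure-Python loop that PyPy's tracing JIT typically accelerates (the function and modulus are illustrative, not taken from the Legion tools):

```python
def checksum(n: int) -> int:
    # A tight pure-Python loop: under CPython every iteration pays
    # interpreter dispatch and integer-boxing overhead. PyPy traces
    # the hot loop and compiles it to machine code, with no source
    # changes -- the same file runs under both interpreters.
    total = 0
    for i in range(n):
        total = (total + i * i) % 1_000_003
    return total

print(checksum(100_000))
```

Running `pypy script.py` instead of `python script.py` is the entire migration for code like this.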
-
Make your programs run faster by better using the data cache (2020)
Legion is also doing something like that: https://legion.stanford.edu/
-
Is Parallel Programming Hard, and, If So, What Can You Do About It? [pdf]
If you really want to dig into it you can read up on the tutorials and/or papers from the Legion project: https://legion.stanford.edu/
But briefly, these task-based programs preserve sequential semantics. That means that, whatever the system actually does when running your program, as long as you follow the rules the parallelism is invisible to the program's execution.
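As a toy illustration of that guarantee (not Legion's actual API), here is a sketch of a runner where each task names the fields it touches: tasks are launched in program order and only wait on earlier tasks touching the same data, so whatever interleaving actually happens, the result matches sequential execution:

```python
from concurrent.futures import ThreadPoolExecutor

class TaskRunner:
    """Toy task runner. Tasks are submitted in program order and
    declare which named fields they touch; each task waits on the
    previous task that touched each of its fields. (Real systems
    like Legion distinguish read from write privileges; this sketch
    conservatively serializes any two tasks sharing a field.)"""

    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=4)
        self.last = {}  # field name -> Future of last task touching it

    def launch(self, fn, fields=()):
        deps = [self.last[f] for f in fields if f in self.last]
        def run():
            for d in deps:  # block until conflicting predecessors finish
                d.result()
            return fn()
        fut = self.pool.submit(run)
        for f in fields:
            self.last[f] = fut
        return fut

data = {"x": 0}
r = TaskRunner()
r.launch(lambda: data.update(x=data["x"] + 1), fields=("x",))
r.launch(lambda: data.update(x=data["x"] * 10), fields=("x",))
final = r.launch(lambda: data["x"], fields=("x",))
print(final.result())  # 10 -- same as running the three tasks in order
```

Independent tasks (ones with disjoint field sets) run concurrently, but the observable result never differs from the sequential program.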
-
Ask HN: Who is hiring? (September 2022)
Computer Science Research Dept., SLAC National Accelerator Laboratory | Research Scientist / Engineer | Menlo Park, CA or REMOTE, VISA | Full Time
We're a research group within SLAC, headed by Alex Aiken (https://theory.stanford.edu/~aiken/). We focus on fundamental CS research that has the potential to impact science, mainly in the areas of high-performance and distributed computing, programming languages, compilers, networks, operating systems, etc. One of our major projects is Legion, a forward-looking programming system for distributed computing (https://legion.stanford.edu/). Legion has been used to create new programming languages (https://regent-lang.org/), seamless distributed NumPy (https://developer.nvidia.com/cunumeric), and a drop-in replacement for Keras and PyTorch (https://flexflow.ai/), among many other things.
We are looking for strong scientists and engineers to join our group. For clarity (because these terms vary by industry/company), scientists mainly focus on producing research results (e.g., papers and research software) while engineers mainly focus on software development and deliverables (e.g., system or application implementation). For scientist positions please expect to provide a CV with relevant publications.
The official application links are below, but please feel free to contact me directly if you have questions. (My HN username @slac.stanford.edu)
Scientist (Computer Science):
https://erp-hprdext.erp.slac.stanford.edu/psp/hprdext/EMPLOY...
Engineer (Computer Science):
https://erp-hprdext.erp.slac.stanford.edu/psp/hprdext/EMPLOY...
We've had some reports that the application site doesn't work well in Google Chrome. You might want to apply in Firefox.
-
The Underwhelming Impact of Software Engineering Research (April 2022)
There are some points in the middle, but it's rare. I worked on one of these [1]. We've been building the system for just over ten years, and are starting to see some truly killer apps being built on top of it [2, 3].
While it has some great benefits once you arrive, the upfront costs are enormous. You basically need to find a funding source (or sources) that will pay for this product while you're building it. Also, in order for the research payoff to be worth it, you need both the product itself, and subsequent innovations it enables, to be research-worthy. Not all areas of research can support this. On top of it all, even when you do this, you'll still spend years of effort in activities that are essentially not research. You're basically responsible for all of your own customer support, sales, marketing, etc.---like a startup, but without the financial upside if you succeed. Yes there is recognition and so on, but the payoffs aren't as dramatic. Most people aren't ready to commit to this path.
Keep in mind that you can't build this in 5 years either. So a single generation of PhD students can't get it done. The only reason we were successful is because the key staff on the project stuck around for 5+ years after their PhDs because we all believed in doing the work.
Given all that, I don't hold it against people at all who just want to build prototypes and then move on to the next thing. It's way less risky and higher reward relative to the costs.
[1]: https://legion.stanford.edu/
[2]: https://flexflow.ai/
[3]: https://developer.nvidia.com/cunumeric
nanobind
-
Progress on No-GIL CPython
Take a look at https://github.com/wjakob/nanobind
> More concretely, benchmarks show up to ~4× faster compile time, ~5× smaller binaries, and ~10× lower runtime overheads compared to pybind11.
-
Advanced Python Mastery – A Course by David Beazley
People should not take that as an endorsement of Swig.
Please use ctypes, cffi or https://github.com/wjakob/nanobind
Beazley himself is amazed that it (Swig) is still in use.
- Swig – Connect C/C++ programs with high-level programming languages
- Nanobind: Tiny and efficient C++/Python bindings
-
Create Python bindings for my C++ code with PyBind11
Nanobind is made by the creator of pybind11. It has a similar interface, but it leverages C++17 and aims to produce bindings that are more efficient in both space and speed.
- Nanobind – Seamless operability between C++17 and Python
-
Cython Is 20
I would recommend using nanobind, the follow-up to pybind11 by the same author (Wenzel Jakob), and moving as much performance-critical code as possible to C or C++. https://github.com/wjakob/nanobind
If you really care about the performance of code called from Python, consider something like NVIDIA Warp (preview). Warp JITs and runs your code on CUDA or CPU. Although Warp targets physics simulation, geometry processing, and procedural animation, it can be used for other tasks as well. https://github.com/NVIDIA/warp
Jax is another option, by Google, jitting and vectorizing code for TPU, GPU or CPU. https://github.com/google/jax
- GitHub - wjakob/nanobind: nanobind — Seamless operability between C++17 and Python
What are some alternatives?
pldb - PLDB: a Programming Language Database. A computable encyclopedia about programming languages.
pybind11 - Seamless operability between C++11 and Python
preshed - 💥 Cython hash tables that assume keys are pre-hashed
awesome-cython - A curated list of awesome Cython resources. Just a draft for now.
arkouda - Arkouda (αρκούδα): Interactive Data Analytics at Supercomputing Scale :bear:
Nuitka - Nuitka is a Python compiler written in Python. It's fully compatible with Python 2.6, 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, and 3.11. You feed it your Python app, it does a lot of clever things, and spits out an executable or extension module.
legate.sparse
matplotlibcpp17 - Alternative to matplotlibcpp with better syntax, based on pybind
HTR-solver - Hypersonic Task-based Research (HTR) solver for the Navier-Stokes equations at hypersonic Mach numbers including finite-rate chemistry for dissociating air and multicomponent transport.
epython - EPython is a typed subset of Python for extending the language with new builtin types and methods
soleil-x - Soleil-X is a turbulence/particle/radiation solver written in the Regent language for execution with the Legion runtime.
avendish - declarative polyamorous cross-system intermedia objects