legion vs zig
| | legion | zig |
|---|---|---|
| Mentions | 11 | 816 |
| Stars | 647 | 30,631 |
| Stars growth (monthly) | 2.2% | 5.2% |
| Activity | 9.9 | 10.0 |
| Latest commit | 16 days ago | 4 days ago |
| Language | C++ | Zig |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
legion
- Legion 24.03.0 – Control Replication
-
Antithesis of a One-in-a-Million Bug: Taming Demonic Nondeterminism
I work on a distributed runtime system for heterogeneous supercomputers [1].
As an example of the sort of bug we regularly deal with, I am at this exact moment tracking down a freeze that occurs on 8,192 nodes of a supercomputer [2]. That means I'm using about 64,000 GPUs and about half a million CPU cores. The smallest node count I've seen my issue at is 2,048 nodes, and at that scale it only happens about 10% of the time.
We've been debating internally whether Antithesis could help us or not. On the one hand, the fuzzing to explore the state space, and deterministic reproduction, are exactly what we want. On the other hand, we believe our state space is much larger than what you see in a typical distributed database. (And not just because of the sheer scale of things; even on a single node we have state machines with on the order of hundreds to thousands of states.) Based on the post here and the "scenario" count explored in CouchDB, I'm not convinced you'd be able to handle us. :-)
I'd be curious what you think. Happy to discuss here, or contact info in profile.
[1]: https://legion.stanford.edu/
[2]: https://www.olcf.ornl.gov/frontier/
-
Progress on No-GIL CPython
Parallelism in CS is a bit like security in CS. People know it matters in the abstract sense, but you really only get into it if you seek out the training specifically. We're getting better at both over time: just as more languages/libraries/etc. are secure by default, more now are parallel by default. There's a ways to go, but I'm glad we didn't do this prematurely, because the technology has improved a lot in the last decade. Look for example at what we can do (safely!) with Rayon in Rust vs (unsafely!) with OpenMP in C++.
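Rayon's safety rests on Rust's ownership rules: code that could race on shared data simply doesn't compile. A minimal std-only sketch of the same guarantee (not Rayon itself, just scoped threads over disjoint slices):

```rust
use std::thread;

fn main() {
    let data: Vec<u64> = (0..1_000_000).collect();
    // Split into disjoint halves. The borrow checker proves the two
    // threads cannot touch the same elements, so a data race is
    // impossible -- with no unsafe code anywhere.
    let (lo, hi) = data.split_at(500_000);
    let total: u64 = thread::scope(|s| {
        let a = s.spawn(|| lo.iter().sum::<u64>());
        let b = s.spawn(|| hi.iter().sum::<u64>());
        a.join().unwrap() + b.join().unwrap()
    });
    assert_eq!(total, 499_999_500_000);
    println!("{total}");
}
```

The equivalent OpenMP reduction in C++ compiles just as happily if you accidentally share a non-atomic accumulator; here the compiler rejects it outright.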
And there are things even further afield like what I work on [1][2][3].
[1]: https://legion.stanford.edu/
[2]: https://regent-lang.org/
[3]: https://github.com/nv-legate/cunumeric
-
Mojo is now available on Mac
Chapel has at least several full-time developers at Cray/HPE and (I think) the US national labs, and has had some for almost two decades. That's much more than $100k.
Chapel is also just one of many other projects broadly interested in developing new programming languages for "high performance" programming. Out of that large field, Chapel is not especially related to the specific ideas or design goals of Mojo. Much more related are things like Codon (https://exaloop.io), and the metaprogramming models in Terra (https://terralang.org), Nim (https://nim-lang.org), and Zig (https://ziglang.org).
But Chapel is great! It has a lot of good ideas, especially for distributed-memory programming, which is its historical focus. It is more related to Legion (https://legion.stanford.edu, https://regent-lang.org), parallel & distributed Fortran, ZPL, etc.
-
Announcing Chapel 1.32
I should also note that there is Pygion if you want to use Python. Not a lot of great reference material right now, but there's the paper:
https://legion.stanford.edu/pdfs/pygion2019.pdf
And code samples:
https://github.com/StanfordLegion/legion/tree/stable/binding...
-
Is anyone using PyPy for real work?
We use PyPy for verification of our software stack [1], and also for profiling tools [2]. The verification tool is basically a complete reimplementation of our main product, and therefore encodes a massive amount of business logic (and is therefore difficult, if not impossible, to rewrite in another language). As with other users, we found the switch to PyPy was seamless and gives us something like a 2.5x speedup out of the box, with (I think) higher speedups in some specific cases.
We eventually rewrote the profiling tool in Rust for additional speedups, but, as mentioned, the verification engine is probably too complicated to ever rewrite, so we really appreciate drop-in tools like PyPy that can speed up our code.
[1]: https://github.com/StanfordLegion/legion/blob/master/tools/l...
[2]: https://github.com/StanfordLegion/legion/blob/master/tools/l...
-
Make your programs run faster by better using the data cache (2020)
Legion is also doing something like that: https://legion.stanford.edu/
-
Is Parallel Programming Hard, and, If So, What Can You Do About It? [pdf]
If you really want to dig into it you can read up on the tutorials and/or papers from the Legion project: https://legion.stanford.edu/
But briefly: these task-based programs preserve sequential semantics. That means that, whatever the system actually does when running your program, as long as you follow the rules the parallelism is invisible to the program's results.
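Legion's model is far richer, but the core idea (tasks whose declared data accesses don't overlap may run in any order, or in parallel, without changing the answer) can be sketched in a few lines of Rust. These are hypothetical toy tasks, not Legion's API:

```rust
use std::thread;

// A toy "task": fills its region with values derived from a base.
fn fill(region: &mut [u64], base: u64) {
    for (i, x) in region.iter_mut().enumerate() {
        *x = base + i as u64;
    }
}

fn main() {
    // Run the two tasks sequentially...
    let mut seq = vec![0u64; 8];
    {
        let (a, b) = seq.split_at_mut(4);
        fill(a, 100);
        fill(b, 200);
    }

    // ...and in parallel. Because the tasks touch disjoint data,
    // the schedule is invisible: both runs produce the same result.
    let mut par = vec![0u64; 8];
    {
        let (a, b) = par.split_at_mut(4);
        thread::scope(|s| {
            s.spawn(move || fill(a, 100));
            s.spawn(move || fill(b, 200));
        });
    }

    assert_eq!(seq, par);
    println!("{par:?}");
}
```

A real task-based runtime discovers that disjointness from the tasks' declared region accesses instead of requiring the programmer to split the data by hand.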
-
Ask HN: Who is hiring? (September 2022)
Computer Science Research Dept., SLAC National Accelerator Laboratory | Research Scientist / Engineer | Menlo Park, CA or REMOTE, VISA | Full Time
We're a research group within SLAC, headed by Alex Aiken (https://theory.stanford.edu/~aiken/). We focus on fundamental CS research that has the potential to impact science, mainly in the areas of high-performance and distributed computing, programming languages, compilers, networks, operating systems, etc. One of our major projects is Legion, a forward-looking programming system for distributed computing (https://legion.stanford.edu/). Legion has been used to create new programming languages (https://regent-lang.org/), seamless distributed NumPy (https://developer.nvidia.com/cunumeric), and a drop-in replacement for Keras and PyTorch (https://flexflow.ai/), among many other things.
We are looking for strong scientists and engineers to join our group. For clarity (because these terms vary by industry/company), scientists mainly focus on producing research results (e.g., papers and research software) while engineers mainly focus on software development and deliverables (e.g., system or application implementation). For scientist positions please expect to provide a CV with relevant publications.
The official application links are below, but please feel free to contact me directly if you have questions. (My HN username @slac.stanford.edu)
Scientist (Computer Science):
https://erp-hprdext.erp.slac.stanford.edu/psp/hprdext/EMPLOY...
Engineer (Computer Science):
https://erp-hprdext.erp.slac.stanford.edu/psp/hprdext/EMPLOY...
We've had some reports that the application site doesn't work well in Google Chrome. You might want to apply in Firefox.
-
The Underwhelming Impact of Software Engineering Research (April 2022)
There are some points in the middle, but they're rare. I worked on one of these [1]. We've been building the system for just over ten years, and are starting to see some truly killer apps being built on top of it [2, 3].
While it has some great benefits once you arrive, the upfront costs are enormous. You basically need to find a funding source (or sources) that will pay for this product while you're building it. Also, in order for the research payoff to be worth it, you need both the product itself, and subsequent innovations it enables, to be research-worthy. Not all areas of research can support this. On top of it all, even when you do this, you'll still spend years of effort in activities that are essentially not research. You're basically responsible for all of your own customer support, sales, marketing, etc.---like a startup, but without the financial upside if you succeed. Yes there is recognition and so on, but the payoffs aren't as dramatic. Most people aren't ready to commit to this path.
Keep in mind that you can't build this in 5 years either. So a single generation of PhD students can't get it done. The only reason we were successful is because the key staff on the project stuck around for 5+ years after their PhDs because we all believed in doing the work.
Given all that, I don't hold it against people at all who just want to build prototypes and then move on to the next thing. It's way less risky and higher reward relative to the costs.
[1]: https://legion.stanford.edu/
[2]: https://flexflow.ai/
[3]: https://developer.nvidia.com/cunumeric
zig
-
Memory-mapped IO registers in Zig. (2021)
There is an issue proposing this approach: https://github.com/ziglang/zig/issues/4284
- Zig Programming Language
- Zig Language 0.12 Release
-
Zig 0.12.0 Release Notes
https://github.com/ziglang/zig/issues/224
e.g.:
> When debugging/prototyping, it's useful to comment out a line without having to refactor, e.g.
-
How to Write a PHP Extension with Zig?
When writing code in a scripting language, sometimes you need that extra bit of performance (or maybe an async feature from Zig).
-
Bun - The One Tool for All Your JavaScript/Typescript Project's Needs?
Node.js is by no means a slow runtime; it wouldn't be so popular if it were. But compared to Bun, it's slow. Bun was built from the ground up with speed in mind, using both JavaScriptCore and Zig. The Bun team spent an enormous amount of time and energy making Bun fast, including lots of profiling, benchmarking, and optimization.
-
Bun 1.1
ntdll.dll!RtlUserThreadStart()
There are valid reasons to use APIs from NTDLL. Where I disagree with zig#1840 is the idea that it is always better to use the NTDLL versions of the APIs. Every other software ecosystem uses the standard Win32 APIs, and diverging from that without a good reason seems like a good way to get unexpected behavior. One concrete example: most users and programmers expect Windows to redirect some file system paths when running under WOW64, but this is implemented in Kernel32, not ntdll.
https://github.com/ziglang/zig/issues/11894
- Zig, Rust, and Other Languages
-
Nanos – A Unikernel
Zig also has an IRC channel on libera (#zig) that is moderated by Andrew Kelley.[1]
[1] https://github.com/ziglang/zig/wiki/Community
- Ask HN: What Underrated Open Source Project Deserves More Recognition?
What are some alternatives?
pldb - PLDB: a Programming Language Database. A computable encyclopedia about programming languages.
Nim - Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).
preshed - 💥 Cython hash tables that assume keys are pre-hashed
Odin - Odin Programming Language
arkouda - Arkouda (αρκούδα): Interactive Data Analytics at Supercomputing Scale :bear:
v - Simple, fast, safe, compiled language for developing maintainable software. Compiles itself in <1s with zero library dependencies. Supports automatic C => V translation. https://vlang.io
legate.sparse
rust - Empowering everyone to build reliable and efficient software.
HTR-solver - Hypersonic Task-based Research (HTR) solver for the Navier-Stokes equations at hypersonic Mach numbers including finite-rate chemistry for dissociating air and multicomponent transport.
go - The Go programming language
soleil-x - Soleil-X is a turbulence/particle/radiation solver written in the Regent language for execution with the Legion runtime.
ssr-proxy-js - A Server-Side Rendering Proxy focused on customization and flexibility!