qbe-rs vs Som

| | qbe-rs | Som |
|---|---|---|
| Mentions | 32 | 8 |
| Stars | 75 | 24 |
| Growth | - | - |
| Activity | 3.3 | 0.0 |
| Last Commit | about 1 year ago | about 2 years ago |
| Language | Rust | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
qbe-rs
- QBE – Compiler Back End
-
Ask HN: LLVM versus WASM?
There is likely no general answer to this question. LLVM and WASM are sufficiently different technologies for different purposes. The original purpose of WASM was to enable code of statically compiled languages like C++ or Rust to run in the browser, whereas LLVM is a huge framework mostly useful as a compiler backend for a multitude of architectures. I'm aware that WASM is increasingly used as a general-purpose managed runtime (like the .NET ECMA-335 Common Language Infrastructure). And don't forget that there are much leaner, but still decent, alternatives to LLVM, such as QBE (https://c9x.me/compile/) or ECS (https://github.com/EigenCompilerSuite/). So the answer to your question heavily depends on what you actually want to implement.
-
CBMC: C bounded model checker. (2021)
Another problem with LLVM I’ve heard about is that its intermediate language (or API, or something) is a moving, informally specified target. People who know LLVM internals might weigh in on that claim. If true, it’s actually easier to target C or a subset of Rust just because it’s static and well-understood.
Two projects sought to mitigate these issues by going in different directions. One was a compiler backend that aimed to be easy to learn with well-specified IL. The other aimed to formalize LLVM’s IL.
http://c9x.me/compile/
https://github.com/AliveToolkit/alive2
There have also been typed assembly languages to support verification from groups like FLINT. One can also combine language-specific analysis with a certified compiler to LLVM IL. Integrating pieces from different languages can have risks. That (IIRC) is being mitigated by people doing secure, abstract compilation.
-
Odin Programming Language
> I think it uses a different backend than LLVM
harec uses https://c9x.me/compile/
-
Frontend for GCC?
Have you considered QBE?
-
What do C programmers think of the Zig language in 2023?
I really hope other new projects (like QBE) can really grow and become widely used
-
Toy C compiler, worth having an IR stage?
I really liked targeting QBE (https://c9x.me/compile/) as an IR, as it gave me lots of back-end optimisations for free 😊.
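For a concrete picture of what targeting QBE as an IR looks like, here is a minimal sketch in Rust: it writes a hand-built QBE IL function to a file and shells out to the `qbe` and `cc` binaries (assumed to be installed and on PATH). The file names and the `add` function are purely illustrative; a real front end would generate the IL from its own AST, or build it through a library such as qbe-rs instead of formatting strings by hand.

```rust
use std::fs;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Hand-written QBE IL for: int add(int a, int b) { return a + b; }
    // A real front end would emit this from its AST.
    let il = r#"
export function w $add(w %a, w %b) {
@start
	%c =w add %a, %b
	ret %c
}
"#;
    fs::write("add.ssa", il)?;

    // qbe lowers the IL to assembly for the host target ...
    let qbe = Command::new("qbe").args(["-o", "add.s", "add.ssa"]).status()?;
    assert!(qbe.success(), "qbe failed");

    // ... and the system C compiler assembles it into an object file.
    let cc = Command::new("cc").args(["-c", "add.s", "-o", "add.o"]).status()?;
    assert!(cc.success(), "cc failed");
    Ok(())
}
```

The resulting `add.o` can then be linked into a program as usual, which is where QBE's back-end optimisations come in without the front end having to do anything extra.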
-
C or LLVM for a fast backend?
There is: QBE.
-
A whirlwind tour of the LLVM optimizer
You might be underestimating the accuracy of the CPU models LLVM uses.
For x86, the same data the code generator uses drives llvm-mca[1], which, given a loop body, can tell you the throughput, latency, and microarchitectural bottlenecks (decoding, ports, dependencies, store forwarding, etc.); if not always precisely, then still no worse than IACA, the tool written at Intel by people who presumably knew how the CPUs work, unlike LLVM contributors and the rest of us, who can only guess and measure. And this is done separately for Haswell, Sandy Bridge, Skylake, etc., not for a generic “x86”. (A minimal sketch of this workflow follows the links below.)
Now, is this the best model you can get? Not exactly[2], but it’s close enough to not matter. Do we often need machine code to be optimized to that level of detail? Perhaps not[3], and with that in mind you can shave at least a factor of ten off LLVM’s considerable bulk at the cost of 20-30% of performance[4,5]. But if you do want those as well, it seems that the complexity of LLVM is a fair price, or has the right order of magnitude at least.
(Frontend not included; the C++ frontend required to bootstrap is sold separately, at a similar markup compared to a C-only frontend with somewhat worse ergonomics.)
[1] https://llvm.org/docs/CommandGuide/llvm-mca.html
[2] https://www.uops.info/
[3] https://briancallahan.net/blog/20211010.html
[4] https://c9x.me/compile/
[5] https://drewdevault.com/talks/qbe.html
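As a minimal sketch of the llvm-mca workflow described above: a small Rust function whose inner loop can be lowered to assembly and fed to llvm-mca with an explicit CPU model. The file name `dot.rs`, the `dot` function, and the choice of `skylake` are assumptions for illustration; `rustc --emit=asm` and `llvm-mca -mcpu=...` are the real flags, but the exact report depends on your toolchain.

```rust
// Analyze the generated loop with llvm-mca, e.g.:
//   rustc -O --emit=asm dot.rs      # writes dot.s
//   llvm-mca -mcpu=skylake dot.s    # throughput/latency/port-pressure report
// (illustrative file names; pick the CPU model you actually care about)

/// Simple dot product; its inner loop is what llvm-mca would analyze.
pub fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = vec![1.0_f32; 1024];
    let b = vec![0.5_f32; 1024];
    println!("{}", dot(&a, &b));
}
```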
Som
-
Making Smalltalk on a Raspberry Pi (2020)
> Smalltalkish
Have a look at the SOM dialect which is successfully used in education: http://som-st.github.io/
Here is an implementation in C++ which runs on LuaJIT: https://github.com/rochus-keller/Som/
> unfortunately out of print book Smalltalk 80: the language and its implementation is commonly recommended
I assume you know this link: http://stephane.ducasse.free.fr/FreeBooks/BlueBook/Bluebook....
Here is an implementation in C++ and Lua: https://github.com/rochus-keller/Smalltalk
- Do transpilers just use a lot of string manipulation and concatenation to output the target language?
-
Ask HN: Admittedly Useless Side Projects?
- https://github.com/rochus-keller/Smalltalk/ Parser, code model, interpreter and navigable browser for the original Xerox Smalltalk-80 v2 sources and virtual image file
- https://github.com/rochus-keller/Som/ Parser, code model, navigable browser and VM for the SOM Smalltalk dialect
- https://github.com/rochus-keller/Simula A Simula 67 parser written in C++ and Qt
> do you regret those endeavours?
No, not in any way; the projects were very entertaining and gave me interesting insights.
-
Ask HN: Recommendation for general purpose JIT compiler
If your DSL is statically typed then I recommend that you have a look at the Mono CLR; it's compatible with the ECMA-335 standard and the IR (CIL) is well documented, even with secondary literature.
If your DSL is dynamically typed I recommend LuaJIT; the bytecode is lean and documented (not as well as CIL, though). LuaJIT also works well with statically typed languages, but Mono is faster in that case. Even though it was originally built for Lua, any compiler can generate LuaJIT bytecode.
Both approaches are lean (Mono about 8 MB, LuaJIT about 1 MB), general purpose, available on many platforms and work well (see e.g. https://github.com/rochus-keller/Oberon/ and https://github.com/rochus-keller/Som/).
-
When is Smalltalk's speed an issue?
At the latest when you run a benchmark suite like Are-we-fast-yet; here are some measurement results: http://software.rochus-keller.info/are-we-fast-yet_crystal_lua_node_som_pharo_i386_results_2020-12-29.pdf. See also https://github.com/rochus-keller/Som/ and https://github.com/rochus-keller/Smalltalk.
-
LuaJIT for backend?
LuaJIT is well suited as a backend/runtime environment for custom languages; I have done it several times (see e.g. https://github.com/rochus-keller/Smalltalk, https://github.com/rochus-keller/Som/, https://github.com/rochus-keller/Oberon/). I also implemented a bit of infrastructure to ease the reuse: https://github.com/rochus-keller/LjTools. LuaJIT has some limitations, though. If you require closures, you have to know that the corresponding LuaJIT FNEW bytecode is not yet supported by the JIT, i.e. it falls back to the interpreter; as a work-around I implemented my own closures. LuaJIT also doesn't support multi-threading, only co-routines. And there is no debugger, and the infrastructure to implement one has limitations (i.e. performance is low when running to breakpoints). For most of my projects this was no issue. Recently I switched to CIL/Mono for my Oberon+ implementation, which was a good move. But I still consider LuaJIT a good choice if you can cope with the mentioned limitations. The major advantage of LuaJIT is its small footprint and impressive performance for dynamic languages.
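The closure work-around mentioned above amounts to classic closure conversion: represent a closure as an explicit environment record plus a plain top-level function, so the runtime never has to create a real closure object (and, in the LuaJIT case, never hits FNEW). Here is a minimal, language-agnostic sketch of that idea in Rust; it is not the author's actual LuaJIT implementation, and the names (`AdderEnv`, `adder_call`) are invented for illustration.

```rust
// Closure conversion by hand: the captured variable lives in an explicit
// environment struct, and the "closure" is the pair (environment, plain fn).
struct AdderEnv {
    n: i32, // captured variable
}

// A top-level function taking the environment explicitly instead of capturing it.
fn adder_call(env: &AdderEnv, x: i32) -> i32 {
    x + env.n
}

fn main() {
    // "make_adder(10)" becomes: allocate an environment ...
    let env = AdderEnv { n: 10 };
    // ... and every call site passes it explicitly.
    println!("{}", adder_call(&env, 32)); // prints 42
}
```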
-
Optimizing an old interpreted language: where to begin?
One option is to leverage someone else's JIT: you could, for example, rewrite the interpreter to transpile to Lua source, which is then run in LuaJIT. There's a Smalltalk dialect which does this successfully; the Lua version runs in 1/12th the time of the C interpreted version. https://github.com/rochus-keller/Som You can use LuaJIT's FFI to call back into the Stunt server, or else just rewrite it completely in Lua --- large parts of the Stunt server will just go away in a native Lua implementation (e.g. the object database is just a table). Javascript would be another candidate for this.
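To make "transpile to Lua source" concrete (and to answer the string-building question raised in the transpiler thread above), here is a minimal sketch in Rust: a toy expression AST is pretty-printed as Lua source, which could then be handed to LuaJIT. The AST shape and names are invented for illustration and have nothing to do with the Stunt server or SOM internals.

```rust
// Toy expression AST; a real transpiler would also cover statements, control flow, etc.
enum Expr {
    Num(f64),
    Var(String),
    Add(Box<Expr>, Box<Expr>),
    Call(String, Vec<Expr>),
}

// Emit Lua source by plain string building -- exactly the
// "string manipulation and concatenation" approach.
fn to_lua(e: &Expr) -> String {
    match e {
        Expr::Num(n) => n.to_string(),
        Expr::Var(name) => name.clone(),
        Expr::Add(a, b) => format!("({} + {})", to_lua(a), to_lua(b)),
        Expr::Call(f, args) => {
            let args: Vec<String> = args.iter().map(to_lua).collect();
            format!("{}({})", f, args.join(", "))
        }
    }
}

fn main() {
    // print(x + 1.5)
    let e = Expr::Call(
        "print".into(),
        vec![Expr::Add(
            Box::new(Expr::Var("x".into())),
            Box::new(Expr::Num(1.5)),
        )],
    );
    println!("{}", to_lua(&e)); // -> print((x + 1.5))
}
```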
-
JITted lang which is faster than C?
This is a completely different kind of measurement; unfortunately this is not clear enough from my Readme. I wanted to find out how well my naive Bluebook interpreter performs on LuaJIT (using my virtual meta-tracing approach) compared to Cog, which is a dedicated Smalltalk VM optimized with all kinds of ingenious approaches over two decades (or even longer, considering Eliot's long experience record). This experiment continues in https://github.com/rochus-keller/Som, because I didn't want to modify the original Smalltalk image. I found that my naive LuaJIT-based approach is about a factor of seven behind the highly optimized Cog/Spur, and further improvements would require similar optimization tricks as in the latter.
What are some alternatives?
mir - A lightweight JIT compiler based on MIR (Medium Internal Representation), and a C11 JIT compiler and interpreter based on MIR
Smalltalk - Parser, code model, interpreter and navigable browser for the original Xerox Smalltalk-80 v2 sources and virtual image file
minivm - A VM That is Dynamic and Fast
rockstar - Makes you a Rockstar C++ Programmer in 2 minutes
ubpf - Userspace eBPF VM
c4 - C in four functions
sljit - Platform independent low-level JIT compiler
cproc - C11 compiler (mirror)
simplelanguage - A simple example language built using the Truffle API.
well - The Future of Assembly Language. https://wellang.github.io/well/
Oberon - Oberon parser, code model & browser, compiler and IDE with debugger, and an implementation of the Oberon+ programming language