Zig self-hosted compiler is now capable of building itself

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • zig

    General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.

    As you might guess, the one that this milestone is celebrating is the LLVM backend, which is now able to compile the compiler itself, despite 3% of the behavior tests not yet passing.

    The new compiler codebase, which is written in Zig instead of C++, uses significantly less memory, and represents a modest performance improvement. There are 5 more upcoming compiler milestones which will have a more significant impact on compilation speed. I talked about this in detail last weekend at the Zig meetup in Milan, Italy. Loris is working on uploading the recording of that talk at this very moment.

    There are 3 main things needed before we can ship this new compiler to everyone.

    1. bug fixes

    2. improved compile errors

    3. implement the remaining runtime safety checks

    If you're looking forward to giving it a spin, subscribe to [this issue](https://github.com/ziglang/zig/issues/89) to be notified when it lands in the master branch.

  • v

    Simple, fast, safe, compiled language for developing maintainable software. Compiles itself in <1s with zero library dependencies. Supports automatic C => V translation. https://vlang.io

    A lot of the so-called Vlang controversy appears to have been disguised allegiance and competition between the newer languages; it looks more like various people defending their interests. Languages like Odin, Zig, Nim, and Crystal are older than Vlang. When Vlang came along and suddenly got a lot of popularity and funding, various competitors seemed to bash it and hope it would disappear.

    In addition, such detractors acted as if a brand-new programming language should be a finished, polished product from day one, when that was not the case for their far older languages. For instance, Odin and Crystal are older than Vlang but have been surpassed by it in various respects.

    And don't get me wrong, I like Odin. Odin is among the newer breed of languages that have continued the trend Go started of non-class-based OOP and more generalized OO, and that are contenders to be alternatives to C/C++ (like Zig).

    In the case of Vlang, it has clearly been developing rapidly and consistently, and it keeps gaining in popularity. Simply looking at its releases (and release schedule) along with its documentation (which various newer contenders still lack) shows that much of the controversy is without merit or distorts the reality of how languages develop.

    https://github.com/vlang/v/releases

  • wabt

    The WebAssembly Binary Toolkit

    You can also target C through Wasm.

    https://github.com/WebAssembly/wabt/tree/main/wasm2c

  • linguist

    Language Savant. If your repository's language is being reported incorrectly, send us a pull request!

    I believe this is the syntax highlighter repo: https://github.com/github/linguist/blob/master/CONTRIBUTING....

  • ixy-languages

    A high-speed network driver written in C, Rust, C++, Go, C#, Java, OCaml, Haskell, Swift, Javascript, and Python

    Reference counting is a GC algorithm.

    I wouldn't buy into Apple's marketing about its performance too much, though.

    https://github.com/ixy-languages/ixy-languages

    It makes sense in the context of tracing GC having been a failure in Objective-C due to its C semantics, while automating Cocoa's retain/release calls was a much safer approach. Swift naturally built on top of that, given its interoperability with Objective-C frameworks.

    Nim has taken other optimizations into consideration, though only in the new ORC implementation.

    Still, all of them are much better than managing memory manually.
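
    To make the point that reference counting is itself a GC algorithm concrete, here is a minimal Rust sketch (Rust chosen purely for illustration; it is not taken from any of the drivers above) using the standard library's `Rc`: cloning a handle is the "retain", dropping it is the "release", and the value is freed when the count reaches zero.

    ```rust
    use std::rc::Rc;

    fn main() {
        // Allocating behind an Rc attaches a reference count to the value.
        let a = Rc::new(String::from("shared buffer"));
        println!("count after creation: {}", Rc::strong_count(&a)); // 1

        {
            // Cloning the handle is the "retain": it only bumps the count.
            let b = Rc::clone(&a);
            println!("count with second handle: {}", Rc::strong_count(&b)); // 2
        } // `b` is dropped here: the implicit "release" decrements the count.

        println!("count after inner scope: {}", Rc::strong_count(&a)); // 1
    } // The count reaches zero here and the String is deallocated.
    ```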

  • septum

    Context-based code search tool

    Ada is another option without a GC. I wrote a search tool for large codebases with it (https://github.com/pyjarrett/septum), and its easy multitasking and CPU pinning let you go wide if the problem you're solving supports it.

    There's very little heap allocation since it supports returning VLAs (like strings) from functions via a secondary stack. Its Alire tool installs the toolchain and provides package management, so trying the language out is straightforward. I've also written a few bindings to C libraries with it, which is ridiculously easy.

  • Rustlings

    :crab: Small exercises to get you used to reading and writing Rust code!

    > Suppose I wanted to try learning Rust again; is there a resource for someone with a lot of (hobbyist) programming experience, and experience with low level languages and memory management (e.g. C), but not complicated low-level languages, like C++?

    The official Rust book is targeted at novices with some programming experience. There's also Rustlings https://github.com/rust-lang/rustlings for a more practical approach.

    > When I tried to work with Rust a few years ago I found it utterly impenetrable. I just had no idea what the borrow checker was doing, did not understand what the error messages meant, and honestly couldn't even understand the documentation or the tutorials on the subject

    The compiler diagnostics have improved a lot over time. It's quite possible that some of the examples you have in mind now produce better error messages.

    > in Rust it's always been a nightmare for me. I just really don't grok the "lifetime" concept at all, it feels like I'm trying to learn academic computer science instead of a programming language.

    Academic computer science calls lifetimes "regions", which is perhaps a clearer term. It's a fairly clean extension of the notion of scope that you'd also find in languages like C or Zig. It's really not that complex, even though the Rust community sometimes finds it difficult to convey the right intuitions.
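
    To make the regions/scopes intuition concrete, here is a small, self-contained Rust sketch (the function and variable names are made up for illustration): the lifetime annotation only names the region for which the borrowed data must stay valid, and the borrow checker rejects uses of a reference outside that region.

    ```rust
    // `longest` borrows both inputs; the annotation 'a just names the region
    // for which both references (and the returned one) must remain valid.
    fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
        if x.len() > y.len() { x } else { y }
    }

    fn main() {
        let outer = String::from("lives for the whole of main");
        let result;
        {
            let inner = String::from("short-lived");
            // Both borrows are alive inside this block, so the call is fine...
            result = longest(&outer, &inner);
            println!("{result}");
        } // ...but `inner`'s region ends here.

        // Using `result` after the block would be rejected: the reference could
        // point at `inner`, whose region has already ended.
        // println!("{result}"); // error[E0597]: `inner` does not live long enough
    }
    ```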

  • Alignment

    Example application highlighting the alignment features of modern C++

    To clarify, I meant how int8/int16 values are packed in structs, as compared to struct bitfields. I can't recall the rules for the stack. Here's more discussion:

    http://www.catb.org/esr/structure-packing/

    https://github.com/Twon/Alignment/blob/master/docs/alignment...

    Also, ARM, for example, doesn't have 8- or 16-bit registers, so an int8 or int16 will use a 32-bit register:

    https://stackoverflow.com/a/23716920
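
    As a rough illustration of the packing point (a sketch of my own, not taken from the linked write-ups; Rust is used here because it can print layouts directly, though it has no bitfields): with C-compatible layout, each field is aligned to its natural boundary, so mixing int8/int16/int32 fields introduces padding that reordering (or a bitfield in C) would avoid.

    ```rust
    use std::mem::{align_of, size_of};

    // C-compatible layout: each field is aligned to its own alignment, so
    // padding bytes appear between `a` and `b`, and after `d`.
    #[repr(C)]
    struct Mixed {
        a: i8,  // 1 byte, then 3 bytes of padding
        b: i32, // 4 bytes, 4-byte aligned
        c: i16, // 2 bytes
        d: i8,  // 1 byte, then 1 byte of tail padding
    }

    // Same fields, reordered largest-first: no interior padding needed.
    #[repr(C)]
    struct Reordered {
        b: i32,
        c: i16,
        a: i8,
        d: i8,
    }

    fn main() {
        println!("Mixed:     size {}, align {}", size_of::<Mixed>(), align_of::<Mixed>());         // 12, 4
        println!("Reordered: size {}, align {}", size_of::<Reordered>(), align_of::<Reordered>()); // 8, 4
    }
    ```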

    Curiosity got the better of me; perhaps Zig had improved significantly. So I compared Zig with Nim on the first benchmark I found (kostya/benchmarks/bf). For the smaller input (bench.b), Zig did run with ~22% less RAM (about 20 kB less).

    However, for the larger input (mandel.b), Nim+ARC used less RAM in safe mode (Zig's safe build used ~33% more): Nim 2.163 MB with -d:release; Zig 2.884 MB with -O ReleaseSafe; Zig 2.687 MB with -O ReleaseFast. The Nim version needs ~0.5 MB less RAM and its code is ~40% shorter. I didn't have time to try the Rust or Go versions, though.

  • swc

    Rust-based platform for the Web

  • benchmarks

    Some benchmarks of different languages

    Yes, as far as I can tell [1]. It's a simple tape algorithm (see the interpreter sketch at the end of this item). Neither had any crazy SIMD, threads, custom containers, etc. They use almost the same function names (and look much like the Go version too). I used `time -l` (https://stackoverflow.com/a/30124747) to measure memory usage.

    Note that I ran the benchmarks locally (MacBook Air M1) because the reported benchmark uses the older (default) Nim GC, while I only use Nim+ARC. I also had to fix the Zig code, and it took a few tries to get the signed/unsigned int conversions working. I tried tweaking flags a bit for both to see how stable they were. Zig's memory usage was pretty constant; Nim had a small speed-versus-memory tradeoff that could be tweaked, but the defaults used the least memory.

    Overall, I'd expect exact memory usage to vary somewhat by benchmark, and one random benchmark isn't conclusive. Still, I didn't find anything to indicate that Zig is clearly better than other new-generation languages. Manual memory management might actually be worse than letting the compiler manage it in some cases.

    1: https://github.com/kostya/benchmarks/tree/master/brainfuck
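
    For context on what a "simple tape algorithm" looks like, here is a minimal Rust sketch of a brainfuck-style interpreter (my own illustration, not the benchmark code itself; input handling is omitted, well-formed programs are assumed, and loops use precomputed bracket jumps):

    ```rust
    use std::collections::HashMap;

    /// Minimal brainfuck interpreter: a byte tape, a data pointer, and a
    /// program counter. Output bytes are collected into a Vec<u8>.
    fn run(program: &[u8]) -> Vec<u8> {
        // Precompute matching brackets so loops become O(1) jumps.
        let mut jumps = HashMap::new();
        let mut stack = Vec::new();
        for (i, &op) in program.iter().enumerate() {
            match op {
                b'[' => stack.push(i),
                b']' => {
                    let open = stack.pop().expect("unbalanced ']'");
                    jumps.insert(open, i);
                    jumps.insert(i, open);
                }
                _ => {}
            }
        }

        let mut tape = vec![0u8; 30_000];
        let (mut ptr, mut pc) = (0usize, 0usize);
        let mut out = Vec::new();
        while pc < program.len() {
            match program[pc] {
                b'>' => ptr += 1,
                b'<' => ptr -= 1,
                b'+' => tape[ptr] = tape[ptr].wrapping_add(1),
                b'-' => tape[ptr] = tape[ptr].wrapping_sub(1),
                b'.' => out.push(tape[ptr]),
                b'[' if tape[ptr] == 0 => pc = jumps[&pc], // skip the loop
                b']' if tape[ptr] != 0 => pc = jumps[&pc], // repeat the loop
                _ => {} // ',' (input) and comment characters are ignored here
            }
            pc += 1;
        }
        out
    }

    fn main() {
        // "++++++++[>++++++++<-]>+." computes 8*8+1 = 65 and prints 'A'.
        let out = run(b"++++++++[>++++++++<-]>+.");
        println!("{}", String::from_utf8_lossy(&out)); // A
    }
    ```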

  • futhark

    :boom::computer::boom: A data-parallel functional programming language

  • Halide

    a language for fast, portable data-parallel computation

  • v-mode

    🌻 An Emacs major mode for the V programming language.

    In that category (of C/C++ alternatives) is also Vlang (https://vlang.io/).

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts