zig vs go

Compare zig and go to see how they differ.


General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software. (by ziglang)


The Go programming language (by golang)
              zig           go
Mentions      801           2,047
Stars         28,854        117,952
Growth        4.0%          1.1%
Activity      9.9           9.9
Last commit   4 days ago    5 days ago
Language      Zig           Go
License       MIT License   BSD 3-clause "New" or "Revised" License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.


Posts with mentions or reviews of zig. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-25.
  • Asynchronous Clean-Up (in Rust)
    5 projects | news.ycombinator.com | 25 Feb 2024
    I have never used it directly, take what I say with a grain of salt.

    As far as I know at least part of the idea was to eliminate the function coloring problem by letting the compiler do some nifty compile-time deductions. This had some issues (I don't know if this is still planned, it seems like the kind of thing that should not work in practice). Additionally, there were all sorts of hard technical issues with LLVM, debugging, etc.

    I recommend checking the issue tracker, eg. https://github.com/ziglang/zig/issues/6025

    I personally don't understand the domain well enough at all, but honestly, I feel like (if possible) Zig should try to double down on its allocator approach.

    Instead of trying to use some compile-time deduction magic, explicitly pass around an "async runtime/executor" struct which you have to interact with. Why not?

  • Show HN: Tokamak – A Dependency Injection-Centric Server-Side Framework for Zig
    6 projects | news.ycombinator.com | 5 Feb 2024
    Atop your readme, you point out that nginx or another reverse proxy should be used. Kudos for that.

    As for performance, I'd be curious what gains you get using `std.http.Server` with keepalive and a threadpool. Possibly you can re-use your ThreadContext - having 1 per thread in the threadpool that you can re-use. `std.Thread.Pool` is also very poorly tuned for a large number of small batch jobs, but that's a place to start.

    [1] https://github.com/ziglang/zig/blob/b3aed4e2c8b4d48b8b12f606...

    6 projects | news.ycombinator.com | 5 Feb 2024
    Yes, fundamentally. In Rust, if you take a parameter of generic type T without any bounds, you cannot call anything on it except for things which are defined for all types. If you specify bounds, only things required by the bounds can be called (plus the ones defined for all types). Another difference is where you get an error when you try to pass something which doesn't adhere to a certain trait. In Rust you will get an error at the call site, not at the place of use (except if you don't specify any bounds).

    Zig is doing just fine without any trait mechanism and it simplifies the language a lot but it does come up from time to time. The usual solution is to just get type information via @typeInfo and error out if the type is something you're not expecting [0]. Not everybody is happy about it though [1] because, among other things, it makes it more difficult to discover what the required type actually is.

    [0] https://github.com/ziglang/zig/blob/b3aed4e2c8b4d48b8b12f606...

    [1] https://github.com/ziglang/zig/issues/17198

  • New Linux glibc flaw lets attackers get root on major distros
    7 projects | news.ycombinator.com | 4 Feb 2024
    It's not so unusual to write the C runtime library in a different language.

    E.g. Zig is getting a libc written in Zig:


    Rust would work too of course.

  • Zig Roadmap 2024 [video]
    6 projects | news.ycombinator.com | 27 Jan 2024
    Hi, core team member here (I'm quoted in a parent comment!). The problem with LLVM is not that optimization is slow - it's perfectly acceptable for release builds to take arbitrarily long for optimal binaries. The problem is how long it takes to emit debug builds.

    Take building the Zig compiler itself in Debug mode. This process takes about 30 seconds running through the Zig pipeline (semantic analysis and generating LLVM IR), and then 90 seconds just spent waiting for LLVM to emit the binary. OTOH, when using our self-hosted x86_64 backend (which is now capable of building the compiler, although is incomplete enough that it's not necessarily integrated into our development cycle quite yet), that 30 seconds is essentially the full build (there are a couple of extra seconds on the end flushing the ELF file).

    I can tell you from first-hand experience that when fixing bugs, a huge amount of time is wasted just waiting for the compiler to build - lots of bugs can be solved with relative ease, but we need to test our fixes! Rebuilds are also made more common by the fact that LLVM has an unfortunate habit of butchering the debug information for some values even in debug builds, so we often have to rebuild with debug prints added to understand a problem. Making rebuilds 75% faster by just ditching LLVM would make a huge difference. Introducing incremental compilation (which we're actively working on) would make these rebuilds under a second, which would improve workflows a crazy amount. This would hugely increase our development velocity wrt both bugfixes and proposal implementation.

    It's also important to note that we have quite a few compiler bugs which are [caused by upstream LLVM bugs](https://github.com/ziglang/zig/issues?q=is%3Aissue+is%3Aopen...). LLVM often ships with regressions which we report before releases come out and they simply don't fix. In the long term, eliminating the use of LLVM as our main code generation backend will mean that all bugs encountered are our own, and thus can be solved more easily.

    6 projects | news.ycombinator.com | 27 Jan 2024
    > If anything this will further worsen LLVM-powered build-times, surely? What's the motivation here?

    The key motivation is that this will allow Zig to drop its dependencies on the LLVM libraries, instead using a separate LLVM compilation to compile the bitcode file. This is nice because it simplifies the build process and drops the Zig compiler binary size by a full order of magnitude - see https://github.com/ziglang/zig/issues/16270 for more details on that. It also allows us to implement incremental compilation on the bitcode file itself to drop compile times a little, which isn't really possible to do through the LLVM API since it doesn't implement certain operations.

    In terms of speed, there's no reason to expect this will worsen our build times; in fact, we expect it will be faster. As with any common C++ API, LLVM's IRBuilder comes with a lot of overhead from how LLVM is written. What we're going to do here is essentially the same work that IRBuilder is doing, but in our own code, for which we will be focusing on performance.

    You can find more details on this at https://github.com/ziglang/zig/issues/13265.

    > ...but that doesn't mean LLVM was easy to develop.

    To be clear, we aren't saying it will be easy to reach LLVM's optimization capabilities. That's a very long-term plan, and one which will unfold over a number of years. The ability to use LLVM is probably never going away, because there might always be some things it handles better than Zig's own code generation. However, trying to get there seems a worthy goal; at the very least, we can get our self-hosted codegen backends to a point where they perform relatively well in Debug mode without sacrificing debuggability.

    6 projects | news.ycombinator.com | 27 Jan 2024
    Thanks for the detailed reply.

    > You can find more details on this at https://github.com/ziglang/zig/issues/13265.

    Thanks for the link, my thoughts mirror those of certik in the thread, which Andrew answered well.

    > at the very least, we can get our self-hosted codegen backends to a point where they perform relatively well in Debug mode without sacrificing debuggability

    Perhaps a useful point of comparison: the lightweight qbe C compiler achieved compile times of around a quarter those of GCC and Clang, with the generated code taking very roughly 170% as long to execute as the code from GCC or Clang. qbe has roughly 0.1% of the lines of code of those 'big' compilers. [0] This should presumably be possible for Zig too, and could be a big win for Zig developers.

    Closing the performance gap with LLVM though would presumably be extremely challenging and, respectfully, I can't see the Zig project achieving this. Compiler optimisation seems to be a game of diminishing returns. Even if this were achieved, optimised compilation would surely be much slower than unoptimised.

    [0] https://archive.fosdem.org/2022/schedule/event/lg_qbe/attach... (Relevant discussion: https://news.ycombinator.com/item?id=11555527 )

  • Passing nothing is surprisingly difficult
    2 projects | news.ycombinator.com | 16 Jan 2024
  • Speed up your code: don't pass structs bigger than 16 bytes on AMD64
    3 projects | news.ycombinator.com | 4 Jan 2024
    > I think this is the same in the still experimental Carbon

    Apparently, it's not (anymore?):


    3 projects | news.ycombinator.com | 4 Jan 2024
    Pass by value / pass by ref is quite a bit of mental overhead, as it effectively affects your ABI/API. Zig tries not to force this: as long as you "pass by value", the compiler can actually decide to pass it by reference. It does expose this kind of footgun though https://github.com/ziglang/zig/issues/5973#issuecomment-1330...


Posts with mentions or reviews of go. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-24.
  • Fast persistent recoverable log and key-value store
    3 projects | news.ycombinator.com | 24 Feb 2024
    Of course it does: just call TCPConn.SetNoDelay(false).

    See https://github.com/golang/go/blob/master/src/net/tcpsock.go#...

  • Go + Hypermedia - A Learning Journey (Part 1)
    6 projects | dev.to | 23 Feb 2024
    Go - programming language
  • Delving Deeper: Enriching Microservices with Golang with CloudWeGo
    7 projects | dev.to | 22 Feb 2024
    Built for the modern development landscape by embracing both Golang and Rust, CloudWeGo delivers advanced features and excellent performance. As proof, benchmark tests have shown that Kitex surpasses gRPC by over 4 times in QPS (Queries Per Second) and latency, with throughput increased by 51%-70%.
  • A beginner's guide to constant-time cryptography (2017)
    6 projects | news.ycombinator.com | 22 Feb 2024
    I noticed in July of 2022 that Go did exactly the vulnerable example and reported it to the security team.


    It was fixed as of Go 1.21 https://go.dev/doc/go1.21


    The article cites JavaScript, which is not constant time. There's no sure way to do constant-time operations in JavaScript, and thus no secure way to do crypto directly in JavaScript. Browsers like Firefox depend on low-level calls which should be implemented in languages that are constant-time capable.

    JavaScript needs something like constant-time WASM in order to do crypto securely, but seeing that the only constant-time WASM project on GitHub has only 16 stars and its last commit was 2 years ago, there doesn't appear to be much interest. https://github.com/WebAssembly/constant-time

    However, for JavaScript, I recommend Paul's library Noble which is "hardened to be algorithmically constant time". It is by far the best library available for JavaScript. https://github.com/paulmillr/noble-secp256k1

  • Maybe Everything Is a Coroutine
    3 projects | news.ycombinator.com | 14 Feb 2024
    > Channels are specifically designed to be a high-speed data bus between goroutines, rather than ever use more expensive and less safe shared memory.

    What do you mean? Shared memory is not more expensive. Memory is memory, it's either cached on your core or not. In fact, Go still has to issue fence instructions to ensure that the memory it observes after a channel read is sequenced after any writes to that memory, so it's at best the same cost you'd have with other forms of inter-thread communication in any language.

    Anyway, even that is missing the point. Go still shares memory if you used a reference type, and most types in Go end up being reference types, because it's the only way to have a variable-sized data structure (and while we're at it, string is the only variable-sized data structure that's also immutable).

    The bigger problem is that Go doesn't enforce thread safety. Channels only make communication safe if you send types that don't contain any mutable references... but Go doesn't give you any way to define your own immutable types. That basically limits you to just string. Instead people send slices, maps, pointers to structs, interfaces, etc. and those are all mutable and Go does nothing to enforce that you didn't mutate them.

    Even if all of that somehow wasn't true, many parallelism patterns simply don't map well to channels, so you still end up with mutexes in many parts of real world projects. Even if you don't see the mutexes, they're in your libraries. For example, Go's http.Transport contains a connection pool, but it uses mutexes instead of channels because even the Go team knows that mutexes still make sense for many real-world patterns.


    This whole "channels make Go safe" myth has to stop. It's confused a generation of Go programmers about the actual safety (and apparently performance) tradeoffs of channels. They do not make Go safer (mutable references are still mutable after being sent on a channel), they do not make it faster (the memory still has to be fenced), and heck while we're at it, they do not even make it simpler ("idiomatic" use of channels introduces many ways that goroutines can deadlock, and deadlock-free use of channels is much more complicated and less idiomatic).

    The most useful thing about channels is that you can select{} on multiple of them so they partly compensate for Go's limitations around selecting on futures in general. They're a poor substitute when you actually needed to select on something like IO, where io.Reader/Writer still don't interact with select, channels, or even cancellation directly.

  • Context Control in Go
    2 projects | news.ycombinator.com | 9 Feb 2024
  • Go 1.22 Release Notes
    5 projects | news.ycombinator.com | 6 Feb 2024
    Is the memory issue reported in Go 1.21 on Linux resolved in Go 1.22?


    5 projects | news.ycombinator.com | 6 Feb 2024
    Server 2012 was dropped in https://github.com/golang/go/issues/57004 and it's still in security support. My numbers are somewhat similar to @olivielpeau in that thread.
  • Copilot: Weapon For Laid Back Developers
    2 projects | dev.to | 6 Feb 2024
    In my example you can see some code written in Go, and I have highlighted the function I am interested in. On the left side I have my Copilot Chat interface opened, and all I have to do is type /explain and Copilot will explain what the function does. And since this is a chat interface, it is of course possible to ask follow-up questions. Pretty powerful, right?
  • Launch HN: Diversion (YC S22) – Cloud-Native Git Alternative
    5 projects | news.ycombinator.com | 22 Jan 2024
    Considering many languages' very own tooling (e.g. gofmt, syn) often has glaring gaps[1][2] in the understanding/roundtripping of the language's AST constructs, I would never be able to trust something like this to store my code.

    [1] https://github.com/golang/go/issues/20744

What are some alternatives?

When comparing zig and go you can also consider the following projects:

Nim - Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).

Odin - Odin Programming Language

v - Simple, fast, safe, compiled language for developing maintainable software. Compiles itself in <1s with zero library dependencies. Supports automatic C => V translation. https://vlang.io

TinyGo - Go compiler for small places. Microcontrollers, WebAssembly (WASM/WASI), and command-line tools. Based on LLVM.

rust - Empowering everyone to build reliable and efficient software.

rust - Rust for the xtensa architecture. Built in targets for the ESP32 and ESP8266

ssr-proxy-js - A Server-Side Rendering Proxy focused on customization and flexibility!

Angular - Deliver web apps with confidence 🚀

crystal - The Crystal Programming Language