Faster Protocol Buffers

This page summarizes the projects mentioned and recommended in the original post.

  • exp-lazyproto

    Experimental fast implementation of Protobufs in Go

    Article author here. Good to see it on HN; someone else submitted it (thanks :-)).

    If you are interested in the topic, you may also be interested in a research library I wrote recently, which among other things exploits the partial (de)serialization technique. It is just a prototype for now; one day I may do a production-quality implementation.
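The partial-deserialization idea mentioned above can be sketched in a few lines of Go: scan the protobuf wire format, skip over fields you do not need, and only materialize the one you ask for. This is a toy illustration, not the exp-lazyproto API; the fixed32/fixed64 wire types are omitted for brevity.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// lazyField scans a protobuf-encoded buffer and returns the raw bytes of
// the first occurrence of wantField, skipping every other field without
// decoding it. Skipping is cheap: for varints we just advance past the
// bytes, and for length-delimited fields we jump by the declared length.
func lazyField(buf []byte, wantField uint64) ([]byte, bool) {
	for len(buf) > 0 {
		tag, n := binary.Uvarint(buf)
		if n <= 0 {
			return nil, false
		}
		buf = buf[n:]
		field, wire := tag>>3, tag&7
		switch wire {
		case 0: // varint
			_, m := binary.Uvarint(buf)
			if m <= 0 {
				return nil, false
			}
			if field == wantField {
				return buf[:m], true
			}
			buf = buf[m:]
		case 2: // length-delimited: strings, bytes, sub-messages
			l, m := binary.Uvarint(buf)
			if m <= 0 || uint64(len(buf[m:])) < l {
				return nil, false
			}
			if field == wantField {
				return buf[m : m+int(l)], true
			}
			buf = buf[m+int(l):]
		default:
			return nil, false // fixed32/fixed64 omitted in this sketch
		}
	}
	return nil, false
}

func main() {
	// Hand-encoded message: field 1 = varint 150, field 2 = string "hello".
	msg := []byte{0x08, 0x96, 0x01, 0x12, 0x05, 'h', 'e', 'l', 'l', 'o'}
	if s, ok := lazyField(msg, 2); ok {
		fmt.Printf("field 2 = %q\n", s) // only field 2 is materialized
	}
}
```

A real implementation would also cache field offsets so repeated accesses do not re-scan, but the core win is the same: fields you never read are never decoded.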

  • FlatBuffers

    FlatBuffers: Memory Efficient Serialization Library

    My go-to these days for fast deserialization of structured data is FlatBuffers[1]. It compacts nicely and, more importantly, deserialization is zero-copy/zero-allocation (within the constraints of your language, where possible), which lets you do neat things like mmap it from disk.

    We used to store 20-30 MB of animation data with it, and we'd just mmap the whole file and let the kernel handle paging it in and out. It worked great.

    I don't know how up to date their benchmarks[2] are, but in my experience it beats almost every other off-the-shelf solution (other than maybe Cap'n Proto, which has some similar properties).


  • oteps

    OpenTelemetry Enhancement Proposals

    This. The statelessness of OTLP is by design. I did consider stateful designs with, e.g., shared-state dictionary compression, but eventually chose not to, so that intermediaries can remain stateless.

    An extension to OTLP that uses shared state (and columnar encoding) to achieve a more compact representation, suitable for the last network leg in the data delivery path, has been proposed and may become a reality in the future.
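The shared-state dictionary idea weighed in the comment above can be sketched as follows. This is illustrative Go, not the proposed OTLP extension's actual encoding: repeated attribute strings cross the wire as small integers after their first occurrence, at the cost of sender and receiver (and any intermediary) having to keep their tables in sync, which is exactly the statefulness trade-off being described.

```go
package main

import "fmt"

// entry is one wire unit: an ID, plus the string definition only the
// first time that ID is introduced. Later occurrences send the ID alone.
type entry struct {
	ID  uint32
	Def string // non-empty only when the ID is first introduced
}

// encoder holds the sender's half of the shared state.
type encoder struct {
	ids map[string]uint32
}

func (e *encoder) encode(s string) entry {
	if id, ok := e.ids[s]; ok {
		return entry{ID: id} // repeat: just the small integer
	}
	id := uint32(len(e.ids))
	e.ids[s] = id
	return entry{ID: id, Def: s} // first occurrence: define it
}

// decoder mirrors the sender's table, entry by entry.
type decoder struct {
	table []string
}

func (d *decoder) decode(en entry) string {
	if en.Def != "" {
		d.table = append(d.table, en.Def) // receiver learns the new ID
	}
	return d.table[en.ID]
}

func main() {
	enc := &encoder{ids: map[string]uint32{}}
	dec := &decoder{}
	// "service.name" is sent in full once, then only as an ID.
	for _, s := range []string{"service.name", "GET /users", "service.name"} {
		fmt.Println(dec.decode(enc.encode(s)))
	}
}
```

Note what breaks if an intermediary restarts or the stream is re-routed: the tables desynchronize and every ID becomes meaningless, which is why a stateless protocol is so much easier on intermediaries.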

  • OK, but I just want readers to be aware that the whole idea that it could take five minutes to parse a million protobufs is completely preposterous. I reimplemented their benchmark just now and it runs at roughly 8 million protos per second, orders of magnitude faster than they state, and I didn't even do anything to optimize it.
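For context on how a throughput figure like "8 million protos per second" is produced, here is a minimal measurement shape in Go. decodeStub is a hypothetical stand-in for a real Unmarshal call, and absolute numbers depend entirely on message shape and hardware.

```go
package main

import (
	"fmt"
	"time"
)

// decodeStub stands in for a real protobuf Unmarshal call; it just touches
// every byte so the loop below cannot be optimized away entirely.
func decodeStub(b []byte) int {
	sum := 0
	for _, v := range b {
		sum += int(v)
	}
	return sum
}

func main() {
	msg := make([]byte, 64) // a small message, as in typical benchmarks
	const iters = 1_000_000
	sink := 0
	start := time.Now()
	for i := 0; i < iters; i++ {
		sink += decodeStub(msg)
	}
	elapsed := time.Since(start)
	fmt.Printf("%d msgs in %v (~%.0f msgs/sec), sink=%d\n",
		iters, elapsed, float64(iters)/elapsed.Seconds(), sink)
}
```

In practice Go's testing.B benchmark harness handles warm-up, timer management, and iteration counts for you and is the better tool for a claim like this; the point here is only that messages-per-second is count divided by wall-clock time over many iterations.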

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
