encoding vs buntdb

Compare encoding and buntdb to see how they differ.

encoding

Go package containing implementations of efficient encoding, decoding, and validation APIs. (by segmentio)

buntdb

BuntDB is an embeddable, in-memory key/value database for Go with custom indexing and geospatial support (by tidwall)
At a glance:
  • Mentions: encoding 8, buntdb 7
  • GitHub stars: encoding 962, buntdb 4,381
  • Star growth (month over month): encoding 0.7%, buntdb -
  • Activity: encoding 3.6, buntdb 0.0
  • Latest commit: encoding 5 months ago, buntdb 27 days ago
  • Language: Go (both)
  • License: MIT License (both)
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

encoding

Posts with mentions or reviews of encoding. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-07.
  • Handling high-traffic HTTP requests with JSON payloads
    5 projects | /r/golang | 7 Dec 2023
  • Rust vs. Go in 2023
    9 projects | news.ycombinator.com | 13 Aug 2023
    https://github.com/BurntSushi/rebar#summary-of-search-time-b...

    Further, Go refusing to have macros means that many libraries use reflection instead, which often makes those parts of the Go program perform no better than Python and in some cases worse. Rust can just generate all of that at compile time with macros, and optimize them with LLVM like any other code. Some Go libraries go to enormous lengths to reduce reflection overhead, but that's hard to justify for most things, and hard to maintain even once done. The legendary https://github.com/segmentio/encoding seems to be abandoned now and progress on Go JSON in general seems to have died with https://github.com/go-json-experiment/json .

    Many people claiming their projects are IO-bound are just assuming that's the case because most of the time is spent in their input reader. If they actually measured they'd see it's not even saturating a 100Mbps link, let alone 1-100Gbps, so by definition it is not IO-bound. Even if they didn't need more throughput than that, they still could have put those cycles to better use or at worst saved energy. Isn't that what people like to say about Go vs Python, that Go saves energy? Sure, but it still burns a lot more energy than it would if it had macros.

    Rust can use state-of-the-art memory allocators like mimalloc, while Go is still stuck on an old fork of tcmalloc, and not just tcmalloc in its original C, but transpiled to Go, so it optimizes much less than LLVM would optimize it. (Many people benchmarking them forget to even try substituting allocators in Rust, so they're actually underestimating just how much faster Rust is.)

    Finally, even Go Generics have failed to improve performance, and in many cases can make it unimaginably worse through, I kid you not, global lock contention hidden behind innocent type assertion syntax: https://planetscale.com/blog/generics-can-make-your-go-code-...

    It's not even close. There are many reasons Go is a lot slower than Rust and many of them are likely to remain forever. Most of them have not seen meaningful progress in a decade or more. The GC has improved, which is great, but that's not even a factor on the Rust side.

  • Quickly checking that a string belongs to a small set
    7 projects | news.ycombinator.com | 30 Dec 2022
    We took a similar approach in our JSON decoder. We needed to support sets (JSON object keys) that aren't necessarily known until runtime, and strings that are up to 16 bytes in length.

    We got better performance with a linear scan and SIMD matching than with a hash table or a perfect hashing scheme.

    See https://github.com/segmentio/asm/pull/57 (AMD64) and https://github.com/segmentio/asm/pull/65 (ARM64). Here's how it's used in the JSON decoder: https://github.com/segmentio/encoding/pull/101
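    The actual implementation referenced above lives in hand-written AMD64/ARM64 assembly in segmentio/asm; the scalar plain-Go sketch below (all names here are illustrative, not from that codebase) shows only the underlying idea: for a tiny set of short keys, a linear scan over a slice avoids hashing entirely and touches contiguous memory.

    ```go
    package main

    import "fmt"

    // smallSet holds a handful of short strings. For tiny sets, a linear
    // scan over a slice often beats a map[string]struct{}: no hash
    // computation, contiguous memory, and comparisons that the real
    // SIMD version can do 16 bytes at a time.
    type smallSet []string

    func (s smallSet) contains(key string) bool {
    	for _, k := range s {
    		if k == key {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	keys := smallSet{"id", "name", "email", "created_at"}
    	fmt.Println(keys.contains("email")) // true
    	fmt.Println(keys.contains("age"))   // false
    }
    ```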

  • 80x improvements in caching by moving from JSON to gob
    6 projects | /r/golang | 11 Apr 2022
    Binary formats work well for some cases but JSON is often unavoidable since it is so widely used for APIs. However, you can make it faster in golang with this https://github.com/segmentio/encoding.
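    The gob switch the post describes needs no third-party code; a minimal round-trip with the standard library's encoding/gob looks like this (the CachedPage type is a made-up example, not from the article):

    ```go
    package main

    import (
    	"bytes"
    	"encoding/gob"
    	"fmt"
    )

    // CachedPage is the kind of value one might store in an in-process cache.
    type CachedPage struct {
    	Path string
    	Body []byte
    	Hits int
    }

    // gobEncode serializes v with encoding/gob, a binary format that skips
    // JSON's per-field name text and string escaping on every write.
    func gobEncode(v CachedPage) ([]byte, error) {
    	var buf bytes.Buffer
    	if err := gob.NewEncoder(&buf).Encode(v); err != nil {
    		return nil, err
    	}
    	return buf.Bytes(), nil
    }

    func gobDecode(b []byte) (CachedPage, error) {
    	var v CachedPage
    	err := gob.NewDecoder(bytes.NewReader(b)).Decode(&v)
    	return v, err
    }

    func main() {
    	in := CachedPage{Path: "/home", Body: []byte("<html></html>"), Hits: 3}
    	b, err := gobEncode(in)
    	if err != nil {
    		panic(err)
    	}
    	out, err := gobDecode(b)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(out.Path, out.Hits)
    }
    ```

    Note that gob only pays off when both sides are Go; for public APIs, JSON remains the interchange format, which is where a faster JSON package helps.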
  • Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
    2 projects | news.ycombinator.com | 3 Mar 2022
    Would love to see results from incorporating https://github.com/segmentio/encoding/tree/master/json!
  • Fastest JSON parser for large (~888kB) API response?
    2 projects | /r/golang | 7 Jan 2022
    Try this one out https://github.com/segmentio/encoding it's always worked well for me
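    The segmentio/encoding README describes its json package as a drop-in replacement for the standard library, so adopting it should amount to changing one import path. The sketch below uses the stdlib so it runs as-is; the swap is noted in a comment (the Event type is an illustrative example, not from the library):

    ```go
    package main

    import (
    	"fmt"

    	// Per the segmentio/encoding README, this import can be swapped for
    	// "github.com/segmentio/encoding/json", which mirrors the standard
    	// library API. Shown with the stdlib so the sketch runs as-is.
    	"encoding/json"
    )

    type Event struct {
    	Name string `json:"name"`
    	Seq  int    `json:"seq"`
    }

    func encode(e Event) ([]byte, error) { return json.Marshal(e) }

    func decode(b []byte) (Event, error) {
    	var e Event
    	err := json.Unmarshal(b, &e)
    	return e, err
    }

    func main() {
    	b, _ := encode(Event{Name: "click", Seq: 7})
    	fmt.Println(string(b)) // {"name":"click","seq":7}
    	e, _ := decode(b)
    	fmt.Println(e.Name, e.Seq)
    }
    ```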
  • 📖 Go Fiber by Examples: Delving into built-in functions
    4 projects | dev.to | 24 Aug 2021
    Converts any interface or string to JSON using the segmentio/encoding package. Also, the JSON method sets the content header to application/json.
  • In-memory caching solutions
    4 projects | /r/golang | 1 Feb 2021
    If you're interested in super fast and easy JSON for that cache, give this a try; I've used it in prod and never had a problem.

buntdb

Posts with mentions or reviews of buntdb. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-15.
  • PostgreSQL: No More Vacuum, No More Bloat
    6 projects | news.ycombinator.com | 15 Jul 2023
    Experimental format to help readability of a long rant:

    1.

    According to the OP, there's a "terrifying tale of VACUUM in PostgreSQL," dating back to "a historical artifact that traces its roots back to the Berkeley Postgres project." (1986?)

    2.

    Maybe the whole idea of "use X, it has been battle-tested for [TIME], is robust, all the bugs have been and keep being fixed," etc., should not really be that attractive or realistic for at least a large subset of projects.

    3.

    In the case of Postgres, on top of piles of "historic code" and cruft, there's the fact that each user of Postgres installs and runs a huge software artifact with hundreds or even thousands of features and dependencies, of which every particular user may only use a tiny subset.

    4.

    In Kleppmann's DDOA [1], after explaining why the declarative SQL language is "better," he writes: "in databases, declarative query languages like SQL turned out to be much better than imperative query APIs." I find this footnote to the paragraph a bit ironic: "IMS and CODASYL both used imperative query APIs. Applications typically used COBOL code to iterate over records in the database, one record at a time." So, SQL was better than CODASYL and COBOL in a number of ways... big surprise?

    Postgres' own PL/pgSQL [2] is a language that (I imagine) most people would rather NOT use: hence a bunch of alternatives, including PL/v8, on its own a huge mass of additional complexity. SQL is definitely "COBOLESQUE" itself.

    5.

    Could we come up with something more minimal than SQL and looking less like COBOL? (Hopefully also getting rid of ORMs in the process). Also, I have found it inspiring to see some people creating databases for themselves. Perhaps not a bad idea for small applications? For instance, I found BuntDB [3], which the developer seems to be using to run his own business [4]. Also, HYTRADBOI? :-) [5].

    6.

    A usual objection to using anything other than an established relational DB is "creating a database is too difficult for the average programmer." How about debugging PostgreSQL issues, developing new storage engines for it, or even building expertise on how to set up the instances properly and keep them alive and performant? Is that easier?

    I personally feel more capable of implementing a small, well-tested, problem-specific implementation of a B-Tree than learning how to develop Postgres extensions, becoming an expert in its configuration and internals, or debugging its many issues.

    Another common opinion is "SQL is easy to use for non-programmers." But every person who knows SQL had to learn it somehow. I'm 100% confident that anyone able to learn SQL should be able to learn a simple, domain-specific programming language designed for querying DBs. And how many of the people who are not able to program imperatively would be able to read a SQL EXPLAIN output and fix deficient queries? If they can, that further supports the idea that they should be able to learn something different from SQL.

    ----

    1: https://dataintensive.net/

    2: https://www.postgresql.org/docs/7.3/plpgsql-examples.html

    3: https://github.com/tidwall/buntdb

    4: https://tile38.com/

    5: https://www.hytradboi.com/

  • Is there a nice embedded json db, like PoloDB (Rust) for Golang
    8 projects | /r/golang | 5 Nov 2022
    https://github.com/tidwall/buntdb -> I think this is the one you want
  • Open Source Databases in Go
    52 projects | /r/golang | 8 Jun 2022
    buntdb - Fast, embeddable, in-memory key/value database for Go with custom indexing and spatial support.
  • Alternative to MongoDB?
    9 projects | /r/golang | 12 May 2022
    BuntDB for NoSQL
  • Path hints for B-trees can bring a performance increase of 150% – 300%
    3 projects | news.ycombinator.com | 30 Jul 2021
    BuntDB [0] from @tidwall uses this package as a backing data structure. And BuntDB is in turn used by Tile38 [1]

    [0] https://github.com/tidwall/buntdb

  • The start of my journey learning Go. Any tips/suggestions would greatly appreciated!
    6 projects | /r/golang | 29 Jun 2021
  • In-memory caching solutions
    4 projects | /r/golang | 1 Feb 2021
    I've used BuntDB and had a great experience with it. It's basically just a JSON-based key-value store. I'm a huge fan of the developers other work (sjson, gjson, jj, etc) and stumbled on it while looking for a simple, embedded DB solution. It's not specifically a cache, though--just a simple DB, so you'd have to write the caching logic yourself.

What are some alternatives?

When comparing encoding and buntdb you can also consider the following projects:

sonic - A blazingly fast JSON serializing & deserializing library

bolt

groupcache - Clone of golang/groupcache with TTL and Item Removal support

badger - Fast key-value DB in Go.

parquet-go - Go library to read/write Parquet files

nutsdb - A simple, fast, embeddable, persistent key/value store written in pure Go. It supports fully serializable transactions and many data structures such as list, set, sorted set.

base64 - Faster base64 encoding for Go

go-memdb - Golang in-memory database built on immutable radix trees

hilbert - Go package for mapping values to and from space-filling curves, such as Hilbert and Peano curves.

goleveldb - LevelDB key/value database in Go.

go_serialization_benchmarks - Benchmarks of Go serialization methods

ledisdb - A high performance NoSQL Database Server powered by Go