encoding VS ndjson.github.io

Compare encoding vs ndjson.github.io and see what their differences are.

encoding

Go package containing implementations of efficient encoding, decoding, and validation APIs. (by segmentio)
              encoding        ndjson.github.io
Mentions      8               17
Stars         964             23
Growth        0.7%            -
Activity      3.6             0.0
Last Commit   5 months ago    9 months ago
Language      Go              CSS
License       MIT License     -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

encoding

Posts with mentions or reviews of encoding. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-07.
  • Handling high-traffic HTTP requests with JSON payloads
    5 projects | /r/golang | 7 Dec 2023
  • Rust vs. Go in 2023
    9 projects | news.ycombinator.com | 13 Aug 2023
    https://github.com/BurntSushi/rebar#summary-of-search-time-b...

    Further, Go refusing to have macros means that many libraries use reflection instead, which often makes those parts of the Go program perform no better than Python and in some cases worse. Rust can just generate all of that at compile time with macros, and optimize them with LLVM like any other code. Some Go libraries go to enormous lengths to reduce reflection overhead, but that's hard to justify for most things, and hard to maintain even once done. The legendary https://github.com/segmentio/encoding seems to be abandoned now and progress on Go JSON in general seems to have died with https://github.com/go-json-experiment/json .

    Many people claiming their projects are IO-bound are just assuming that's the case because most of the time is spent in their input reader. If they actually measured they'd see it's not even saturating a 100Mbps link, let alone 1-100Gbps, so by definition it is not IO-bound. Even if they didn't need more throughput than that, they still could have put those cycles to better use or at worst saved energy. Isn't that what people like to say about Go vs Python, that Go saves energy? Sure, but it still burns a lot more energy than it would if it had macros.

    Rust can use state-of-the-art memory allocators like mimalloc, while Go is still stuck on an old fork of tcmalloc, and not just tcmalloc in its original C, but transpiled to Go so it optimizes much less than LLVM would optimize it. (Many people benchmarking them forget to even try substitute allocators in Rust, so they're actually underestimating just how much faster Rust is)

    Finally, even Go Generics have failed to improve performance, and in many cases can make it unimaginably worse through -- I kid you not -- global lock contention hidden behind innocent type assertion syntax: https://planetscale.com/blog/generics-can-make-your-go-code-...

    It's not even close. There are many reasons Go is a lot slower than Rust and many of them are likely to remain forever. Most of them have not seen meaningful progress in a decade or more. The GC has improved, which is great, but that's not even a factor on the Rust side.

  • Quickly checking that a string belongs to a small set
    7 projects | news.ycombinator.com | 30 Dec 2022
    We took a similar approach in our JSON decoder. We needed to support sets (JSON object keys) that aren't necessarily known until runtime, and strings that are up to 16 bytes in length.

    We got better performance with a linear scan and SIMD matching than with a hash table or a perfect hashing scheme.

    See https://github.com/segmentio/asm/pull/57 (AMD64) and https://github.com/segmentio/asm/pull/65 (ARM64). Here's how it's used in the JSON decoder: https://github.com/segmentio/encoding/pull/101
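
    A simplified, pure-Go sketch of the linear-scan idea described above (the real implementation in segmentio/asm does the comparisons with SIMD instructions in assembly; the type and field names here are made up for illustration):

        package main

        import "fmt"

        // smallSet holds a handful of short keys, e.g. JSON object field names
        // discovered at runtime. For small sets, scanning the slice linearly can
        // beat a map lookup because it avoids hashing and stays within a few
        // cache lines; the SIMD version compares many bytes per instruction.
        type smallSet struct {
            keys []string
        }

        // index returns the position of k in the set, or -1 if it is absent.
        func (s smallSet) index(k string) int {
            for i, key := range s.keys {
                if key == k {
                    return i
                }
            }
            return -1
        }

        func main() {
            fields := smallSet{keys: []string{"id", "name", "email", "created_at"}}
            fmt.Println(fields.index("email")) // 2
            fmt.Println(fields.index("phone")) // -1
        }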

  • 80x improvements in caching by moving from JSON to gob
    6 projects | /r/golang | 11 Apr 2022
    Binary formats work well for some cases but JSON is often unavoidable since it is so widely used for APIs. However, you can make it faster in golang with this https://github.com/segmentio/encoding.
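
    For reference, a minimal sketch of how segmentio/encoding is typically adopted: its json subpackage mirrors the standard library's Marshal/Unmarshal signatures, so switching is usually just a change of import path (the struct below is made up for illustration):

        package main

        import (
            "fmt"

            "github.com/segmentio/encoding/json" // drop-in for encoding/json
        )

        type event struct {
            Name  string `json:"name"`
            Count int    `json:"count"`
        }

        func main() {
            // Marshal has the same signature as encoding/json.Marshal.
            b, err := json.Marshal(event{Name: "signup", Count: 3})
            if err != nil {
                panic(err)
            }
            fmt.Println(string(b)) // {"name":"signup","count":3}

            // Unmarshal likewise matches encoding/json.Unmarshal.
            var e event
            if err := json.Unmarshal(b, &e); err != nil {
                panic(err)
            }
            fmt.Println(e.Name, e.Count) // signup 3
        }
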
  • Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
    2 projects | news.ycombinator.com | 3 Mar 2022
    Would love to see results from incorporating https://github.com/segmentio/encoding/tree/master/json!
  • Fastest JSON parser for large (~888kB) API response?
    2 projects | /r/golang | 7 Jan 2022
    Try this one out: https://github.com/segmentio/encoding. It's always worked well for me.
  • 📖 Go Fiber by Examples: Delving into built-in functions
    4 projects | dev.to | 24 Aug 2021
    Converts any interface or string to JSON using the segmentio/encoding package. The JSON method also sets the Content-Type header to application/json.
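
    A minimal sketch of the Fiber handler pattern being described, assuming Fiber v2's API (the route and field names are made up for illustration):

        package main

        import (
            "log"

            "github.com/gofiber/fiber/v2"
        )

        func main() {
            app := fiber.New()

            // c.JSON serializes the value and sets the Content-Type
            // response header to application/json.
            app.Get("/user", func(c *fiber.Ctx) error {
                return c.JSON(fiber.Map{"name": "ada", "admin": true})
            })

            log.Fatal(app.Listen(":3000"))
        }
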
  • In-memory caching solutions
    4 projects | /r/golang | 1 Feb 2021
    If you're interested in super fast & easy JSON for that cache, give this a try. I've used it in prod and never had a problem.

ndjson.github.io

Posts with mentions or reviews of ndjson.github.io. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-11.
  • What the fuck
    2 projects | /r/programminghorror | 11 Apr 2023
    However, since every JSON document can be represented in a single line, something like newline-delimited JSON / JSON Lines feels like it would've been more suitable for that kind of data.
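
    For illustration, an NDJSON / JSON Lines file is simply one complete JSON document per line, so it can be appended to and streamed record by record (the field names below are made up):

        {"id": 1, "user": "ada", "event": "login"}
        {"id": 2, "user": "grace", "event": "login"}
        {"id": 3, "user": "ada", "event": "logout"}
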
  • The XML spec is 25 years old today
    1 project | news.ycombinator.com | 10 Feb 2023
  • Consider Using CSV
    7 projects | news.ycombinator.com | 10 Dec 2022
    No one uses that format for streamed JSON; see ndjson and jsonl.

    http://ndjson.org/

    The size complaint is overblown, as repeated fields are compressed away.

    As other folks rightfully commented, CSV is a minefield. One should assume every CSV file is broken in some way. They also don't enumerate any of the downsides of CSV.

    What people should consider is using formats like Avro or Parquet that carry their schema with them, so the data can be loaded and analyzed without having to manually deal with column meanings.

  • DevTool Intro: The Algolia CLI!
    2 projects | dev.to | 15 Aug 2022
    What is ndjson? Newline-delimited JSON is the format the Algolia CLI uses when reading from and writing to files. This means that any command that passes ndjson-formatted data as output or accepts it as input can be piped together with an Algolia CLI command! We’ll see more of this in the next example.
  • On read of JSON file it loads the entire JSON into memory.
    1 project | /r/learnpython | 19 Jul 2022
    You might consider using json-lines format (also known as newline-delimited JSON), in which each line is a separate JSON document so they can be loaded individually.
  • How to format it as json?
    1 project | /r/golang | 27 Jun 2022
    The format you're getting is known as Newline-Delimited JSON. Instead of trying to parse the whole input and pass that to the JSON Decoder, you can use something like bufio.Scanner to read and parse it line by line.
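
    A minimal sketch of that approach using only the standard library (the input and struct fields are made up for illustration):

        package main

        import (
            "bufio"
            "encoding/json"
            "fmt"
            "strings"
        )

        type record struct {
            Name string `json:"name"`
            N    int    `json:"n"`
        }

        func main() {
            // Newline-delimited JSON: one complete document per line.
            input := "{\"name\":\"a\",\"n\":1}\n{\"name\":\"b\",\"n\":2}\n"

            scanner := bufio.NewScanner(strings.NewReader(input))
            for scanner.Scan() {
                var r record
                if err := json.Unmarshal(scanner.Bytes(), &r); err != nil {
                    fmt.Println("skipping malformed line:", err)
                    continue
                }
                fmt.Println(r.Name, r.N)
            }
            if err := scanner.Err(); err != nil {
                fmt.Println("scan error:", err)
            }
        }
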
  • Arrow2 0.12.0 released - including almost complete support for Parquet
    2 projects | /r/rust | 5 Jun 2022
    This is in opposition to NDJSON, which allows splitting records without deserializing the JSON itself, via e.g. read_lines. FWIW, CSV suffers from the same problem as JSON - it's generally not possible to break it into records without deserializing. It is worse than NDJSON because the character \n may appear at any position within an item, thus forbidding read_lines.
  • Processing large JSON files in Python without running out of memory
    1 project | /r/Python | 18 Mar 2022
    I've always seen it referred to as ndjson
  • Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
    2 projects | news.ycombinator.com | 3 Mar 2022
    I think this would be fine, as long as the CSV layer was still parsable per RFC 4180; then you could still use a normal CSV parser to parse the CSV layer and a normal JSON parser to parse the JSON layer. My worry with your example is that it is neither format, so it will need custom serialisation and deserialisation logic, as it is essentially a brand new format.

    https://datatracker.ietf.org/doc/html/rfc4180

    If you’re looking for line-oriented JSON, another option would be ndjson: http://ndjson.org/

  • IETF should keep XMPP as IM standard, instead of Matrix
    7 projects | news.ycombinator.com | 16 Jan 2022

What are some alternatives?

When comparing encoding and ndjson.github.io you can also consider the following projects:

sonic - A blazingly fast JSON serializing & deserializing library

ndjson - Streaming line delimited json parser + serializer

groupcache - Clone of golang/groupcache with TTL and Item Removal support

flatten-tool - Tools for generating CSV and other flat versions of the structured data

parquet-go - Go library to read/write Parquet files

miller - Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON

base64 - Faster base64 encoding for Go

babashka - A Clojure babushka for the grey areas of Bash (native fast-starting Clojure scripting environment) [Moved to: https://github.com/babashka/babashka]

buntdb - BuntDB is an embeddable, in-memory key/value database for Go with custom indexing and geospatial support

datasette - An open source multi-tool for exploring and publishing data

hilbert - Go package for mapping values to and from space-filling curves, such as Hilbert and Peano curves.

grop - helper script for the `gron | grep | gron -u` workflow