cyanide VS ndjson.github.io

Compare cyanide vs ndjson.github.io and see how they differ.

                cyanide              ndjson.github.io
Mentions        9                    17
Stars           11                   23
Growth          -                    -
Activity        3.2                  0.0
Last commit     10 months ago        9 months ago
Language        Elixir               CSS
License         Apache License 2.0   -
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

cyanide

Posts with mentions or reviews of cyanide. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-23.
  • Would you recommend JSON/CSV/Other for data storage in games?
    1 project | /r/gamedev | 23 Mar 2023
    No, it stands for "Binary JSON".
  • What is MongoDB ?
    2 projects | dev.to | 3 Nov 2022
    BSON specification
  • MUON: Compact and simple binary format, that uses gaps in Unicode encoding for markup
    1 project | /r/Python | 27 Jul 2022
    I recommend looking at https://ubjson.org and https://bsonspec.org; these will answer most of your questions.
  • I need a json file that includes all or most of the data types supported by MongoDB.
    1 project | /r/mongodb | 11 May 2022
    or https://bsonspec.org/
  • Minimizing the size of JSON by using CodingKey
    6 projects | /r/swift | 4 Mar 2022
    BSON (Binary JSON), with a Swift library here; there should be one for your other end.
  • Basics of MongoDB
    3 projects | dev.to | 18 Nov 2021
    At the start of this tutorial we noted that data in MongoDB is stored in collections, and that MongoDB uses a syntax similar to JSON. That syntax is called "Binary JSON", or BSON. BSON is similar to JSON, but it is more like an encoded serialization of JSON. Useful information can be found on the BSON website (a minimal encoding sketch follows this list).
  • It's Time to Retire the CSV
    8 projects | news.ycombinator.com | 18 Aug 2021
    > I'm saying that when you decode an Avro document, the result that comes out (presuming you don't tell the Avro decoder anything special about custom types your runtime supports and how it should map them) is a JSON document.

    Semantic point: it's not a "document".

    There are tools which will decode Avro and output the data in JSON (typically using the JSON encoding of Avro: https://avro.apache.org/docs/current/spec.html#json_encoding), but the ADT that is created is by no means a JSON document. The ADT that is created has more complex semantics than JSON; JSON is not the canonical representation.

    > By which I don't mean JSON-encoded text, but rather an in-memory ADT that has the exact set of types that exist in JSON, no more and no less.

    Except Avro has data types that are not the exact set of types that exist in JSON. The first clue on this might be that the Avro spec includes mappings that list how primitive Avro types are mapped to JSON types.

    > Or, to put that another way, Avro is a way to encode JSON-typed data, just as "JSON text", or https://bsonspec.org/, is a way to encode JSON-typed data

    BSON, by design, was meant to be a more efficient way to encode JSON data, so yes, it is a way to encode JSON-typed data. Avro, however, was not defined as a way to encode JSON data. It was defined as a way to encode data (with a degree of specialization for the case of Hadoop sequence files, where you are generally storing a large number of small records in one file).

    A simple counter example: Avro has a "float" type, which is a 32-bit IEEE 754 floating point number. Neither JSON nor BSON have that type.

    Technically, JSON doesn't really have types, it has values, but even if you pretend that JavaScript's types are JSON's types, there's nothing "canonical" about JavaScript's types for Avro.

    Yes, you can represent JSON data in Avro, and Avro in JSON, much as you can represent data in two different serialization formats. Avro's data model is very much defined independently of JSON's data model (as you'd expect).

  • Sending 😀 in Go
    3 projects | dev.to | 13 Jul 2021
    For me, this exploration started when I was attempting to improve handling of Unicode surrogate pair values in the MongoDB Go driver's Extended JSON unmarshaler. The Extended JSON format is an extension to the standard JSON format that adds type information and allows deterministic conversion to and from BSON.
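
Several of the posts above describe BSON as an encoded, binary serialization of JSON. As a rough illustration of what that encoding looks like on the wire, here is a minimal Python sketch of the document layout described at bsonspec.org - a little-endian int32 length prefix, a sequence of type-tagged elements, and a trailing null byte - covering only a few scalar types. It is not a substitute for a real library such as cyanide or an official MongoDB driver, which also handle nested documents, arrays, ObjectIds, dates, and the rest of the BSON type set.

    import struct

    # Minimal, illustrative BSON encoder for flat documents containing only
    # strings, 32-bit ints, 64-bit floats, booleans, and None.

    def encode_element(key, value):
        name = key.encode("utf-8") + b"\x00"        # element names are C-strings
        if isinstance(value, bool):                 # bool must be checked before int
            return b"\x08" + name + (b"\x01" if value else b"\x00")
        if isinstance(value, int):
            return b"\x10" + name + struct.pack("<i", value)  # int32, little-endian
        if isinstance(value, float):
            return b"\x01" + name + struct.pack("<d", value)  # 64-bit IEEE 754 double
        if isinstance(value, str):
            data = value.encode("utf-8") + b"\x00"
            return b"\x02" + name + struct.pack("<i", len(data)) + data
        if value is None:
            return b"\x0A" + name
        raise TypeError(f"unsupported type: {type(value)!r}")

    def encode_document(doc):
        body = b"".join(encode_element(k, v) for k, v in doc.items())
        # the int32 total length counts itself, the elements, and the trailing 0x00
        return struct.pack("<i", len(body) + 5) + body + b"\x00"

    print(encode_document({"hello": "world"}).hex())
    # 160000000268656c6c6f0006000000776f726c640000

The explicit length prefixes are what make BSON cheap to traverse and skip through, compared with re-parsing JSON text.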
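
The Avro thread above hinges on the observation that Avro's "float" is a 32-bit IEEE 754 type, which neither JSON nor BSON defines (BSON's "double" is 64-bit, and JSON numbers are usually decoded as 64-bit doubles). A quick, self-contained Python check makes that distinction concrete:

    import struct

    # Round-trip a Python float (a 64-bit double) through a 32-bit float,
    # the way an Avro "float" field would store it.
    x = 0.1
    f32 = struct.unpack("<f", struct.pack("<f", x))[0]

    print(x)         # 0.1
    print(f32)       # 0.10000000149011612 (nearest 32-bit value, widened back to 64 bits)
    print(x == f32)  # False: the 32-bit type is genuinely narrower, not just a different label

Mapping Avro data onto "the types that exist in JSON" is therefore a lossy projection rather than an identity, which is the parent comment's point.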

ndjson.github.io

Posts with mentions or reviews of ndjson.github.io. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-11.
  • What the fuck
    2 projects | /r/programminghorror | 11 Apr 2023
    However, since every JSON document can be represented in a single line, something like newline-delimited JSON / JSON Lines feels like it would've been more suitable for that kind of data.
  • The XML spec is 25 years old today
    1 project | news.ycombinator.com | 10 Feb 2023
  • Consider Using CSV
    7 projects | news.ycombinator.com | 10 Dec 2022
    No one uses that format for streamed JSON; see ndjson and jsonl.

    http://ndjson.org/

    The size complaint is overblown, as repeated fields are compressed away.

    As other folks rightfully commented, CSV is a minefield; one should assume every CSV file is broken in some way. The article also doesn't enumerate any of the downsides of CSV.

    What people should consider is using formats like Avro or Parquet that carry their schema with them, so the data can be loaded and analyzed without having to manually deal with column meanings.

  • DevTool Intro: The Algolia CLI!
    2 projects | dev.to | 15 Aug 2022
    What is ndjson? Newline-delimited JSON is the format the Algolia CLI reads from and writes to files. This means that any command that produces ndjson-formatted data as output or accepts it as input can be piped together with an Algolia CLI command! We’ll see more of this in the next example.
  • On read of JSON file it loads the entire JSON into memory.
    1 project | /r/learnpython | 19 Jul 2022
    You might consider using json-lines format (also known as newline-delimited JSON), in which each line is a separate JSON document so they can be loaded individually.
  • How to format it as json?
    1 project | /r/golang | 27 Jun 2022
    The format you're getting is known as Newline-Delimited JSON. Instead of trying to parse the whole input and pass that to the JSON Decoder, you can use something like bufio.Scanner to get and parse it line by line.
  • Arrow2 0.12.0 released - including almost complete support for Parquet
    2 projects | /r/rust | 5 Jun 2022
    This is in opposition to NDJSON, which allows splitting records without deserializing the JSON itself, e.g. via read_lines (a short sketch of this pattern follows this list). FWIW, CSV suffers from the same problem as JSON - it is generally not possible to break it into records without deserializing. It is worse than NDJSON because the character \n may appear at any position within an item, thus forbidding read_lines.
  • Processing large JSON files in Python without running out of memory
    1 project | /r/Python | 18 Mar 2022
    I've always seen it referred to as ndjson
  • Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
    2 projects | news.ycombinator.com | 3 Mar 2022
    I think this would be fine: as long as the CSV layer is still parsable per RFC 4180, you can use a normal CSV parser for the CSV layer and a normal JSON parser for the JSON layer (a small sketch of this layering follows this list). My worry with your example is that it is neither format, so it will need custom serialisation and deserialisation logic, as it is essentially a brand new format.

    https://datatracker.ietf.org/doc/html/rfc4180

    If you’re looking for line-oriented JSON, another option would be ndjson: http://ndjson.org/

  • IETF should keep XMPP as IM standard, instead of Matrix
    7 projects | news.ycombinator.com | 16 Jan 2022
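
Several of the posts above make the same practical point: because each NDJSON record is a complete JSON document on its own line, a file can be produced and consumed one record at a time instead of being loaded whole. Here is a minimal Python sketch of that pattern; the file name and record fields are invented for illustration.

    import json

    def write_ndjson(path, records):
        with open(path, "w", encoding="utf-8") as f:
            for record in records:
                # compact encoding; json.dumps never emits raw newlines unless asked to indent
                f.write(json.dumps(record, separators=(",", ":")) + "\n")

    def read_ndjson(path):
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:                 # tolerate trailing blank lines
                    yield json.loads(line)

    write_ndjson("events.ndjson", ({"id": i, "ok": i % 2 == 0} for i in range(3)))
    for event in read_ndjson("events.ndjson"):
        print(event)

The Go suggestion above (bufio.Scanner, parsing each line as JSON) is the same idea; since the framing is just a newline, any language's line-reading primitive works.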
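
For the layering discussed in the "Speeding up Go's builtin JSON encoder" thread, here is a small, hypothetical Python sketch: keep the outer layer ordinary RFC 4180-style CSV so a standard CSV parser handles the quoting, and put JSON in one column for a standard JSON parser to handle on the way out. Column names and data are invented for illustration.

    import csv
    import io
    import json

    rows = [("alice", {"scores": [1, 2], "active": True}),
            ("bob", {"scores": [], "active": False})]

    buf = io.StringIO()
    writer = csv.writer(buf)    # the csv module quotes embedded commas and quote characters
    writer.writerow(["name", "payload"])
    for name, payload in rows:
        writer.writerow([name, json.dumps(payload)])

    buf.seek(0)
    for record in csv.DictReader(buf):
        print(record["name"], json.loads(record["payload"]))

Because the embedded JSON is quoted like any other field, it never breaks the outer layer, which is the "still parsable as CSV" property the commenter asks for.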

What are some alternatives?

When comparing cyanide and ndjson.github.io you can also consider the following projects:

BSONMap - Elixir package that applies a function to each document in a BSON file.

ndjson - Streaming line delimited json parser + serializer

naya - A fast streaming JSON parser written in Python

flatten-tool - Tools for generating CSV and other flat versions of the structured data

json - Strongly typed JSON library for Rust

miller - Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON

json - JSON for Modern C++

babashka - A Clojure babushka for the grey areas of Bash (native fast-starting Clojure scripting environment) [Moved to: https://github.com/babashka/babashka]

csvz - The hot new standard in open databases

datasette - An open source multi-tool for exploring and publishing data

Mongoose - MongoDB object modeling designed to work in an asynchronous environment.

grop - helper script for the `gron | grep | gron -u` workflow