ndjson.github.io VS arrow2

Compare ndjson.github.io vs arrow2 and see what their differences are.

arrow2

Transmute-free Rust library to work with the Arrow format (by jorgecarleitao)
               ndjson.github.io   arrow2
Mentions       17                 25
Stars          23                 1,071
Growth         -                  -
Activity       0.0                0.0
Last commit    9 months ago       3 months ago
Language       CSS                Rust
License        -                  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ndjson.github.io

Posts with mentions or reviews of ndjson.github.io. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-11.
  • What the fuck
    2 projects | /r/programminghorror | 11 Apr 2023
    However, since every JSON document can be represented in a single line, something like newline-delimited JSON / JSON Lines feels like it would've been more suitable for that kind of data.
  • The XML spec is 25 years old today
    1 project | news.ycombinator.com | 10 Feb 2023
  • Consider Using CSV
    7 projects | news.ycombinator.com | 10 Dec 2022
    No one uses that format for streamed JSON; see ndjson and jsonl

    http://ndjson.org/

    The size complaint is overblown, as repeated fields are compressed away.

    As other folks rightfully commented, CSV is a minefield. One should assume every CSV file is broken in some way. They also don't enumerate any of the downsides of CSV.

    What people should consider is using formats like Avro or Parquet that carry their schema with them, so the data can be loaded and analyzed without having to manually deal with column meanings.

  • DevTool Intro: The Algolia CLI!
    2 projects | dev.to | 15 Aug 2022
    What is ndjson? Newline-delimited JSON is the format the Algolia CLI reads from and writes to files. This means that any command that passes ndjson-formatted data as output or accepts it as input can be piped together with an Algolia CLI command! We'll see more of this in the next example.
  • On read of JSON file it loads the entire JSON into memory.
    1 project | /r/learnpython | 19 Jul 2022
    You might consider using json-lines format (also known as newline-delimited JSON), in which each line is a separate JSON document so they can be loaded individually.
  • How to format it as json?
    1 project | /r/golang | 27 Jun 2022
    The format you're getting is known as Newline-Delimited JSON. Instead of trying to parse the whole input and pass that to the JSON Decoder, you can use something like bufio.Scanner to get and parse it line by line.
  • Arrow2 0.12.0 released - including almost complete support for Parquet
    2 projects | /r/rust | 5 Jun 2022
    This is in opposition to NDJSON, which allows splitting records without deserializing the JSON itself, e.g. via read_lines (see the sketch after this list of posts). FWIW, CSV suffers from the same problem as JSON - it is generally not possible to break it into records without deserializing. It is worse than NDJSON because the character \n may appear at any position within an item, thus forbidding read_lines.
  • Processing large JSON files in Python without running out of memory
    1 project | /r/Python | 18 Mar 2022
    I've always seen it referred to as ndjson
  • Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
    2 projects | news.ycombinator.com | 3 Mar 2022
    I think this would be fine, as long as the CSV layer was still parseable per RFC 4180; then you could still use a normal CSV parser for the CSV layer and a normal JSON parser for the JSON layer. My worry with your example is that it is neither format, so it will need custom serialisation and deserialisation logic, as it is essentially a brand-new format.

    https://datatracker.ietf.org/doc/html/rfc4180

    If you’re looking for line-oriented JSON, another option would be ndjson: http://ndjson.org/

  • IETF should keep XMPP as IM standard, instead of Matrix
    7 projects | news.ycombinator.com | 16 Jan 2022
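
Several of the comments above rely on the same property: because each NDJSON record occupies exactly one line, records can be split apart with plain line reading, and the JSON is only parsed one record at a time. Below is a minimal Rust sketch of that approach (not code from any of the projects above); the file name events.ndjson is invented, and serde_json is just one reasonable choice of JSON parser.

    use std::fs::File;
    use std::io::{BufRead, BufReader};

    fn main() -> std::io::Result<()> {
        // Each line is a complete JSON document, so records can be separated
        // with plain line reading before any JSON parsing happens.
        let file = File::open("events.ndjson")?;
        for line in BufReader::new(file).lines() {
            let line = line?;
            if line.trim().is_empty() {
                continue; // tolerate blank lines
            }
            // Only now is the JSON deserialized, one record at a time, so
            // memory use stays bounded by the largest single record.
            let record: serde_json::Value =
                serde_json::from_str(&line).expect("each line must be valid JSON");
            println!("{}", record);
        }
        Ok(())
    }

The same shape works in any language with a line reader (bufio.Scanner in Go, iterating over a file object in Python), which is what the comments above suggest.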

arrow2

Posts with mentions or reviews of arrow2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-03.
  • Polars: Company Formation Announcement
    3 projects | news.ycombinator.com | 3 Aug 2023
    One of the interesting components of Polars that I've been watching is its use of the Apache Arrow memory format, which is a standard layout for data in memory that enables processing (querying, iterating, calculating, etc.) in a language-agnostic way, in particular without having to copy/convert it into the local object format first. This enables cross-language data access by mmapping or transferring a single buffer, with zero [de]serialization overhead (a small sketch of this array layout follows this list of posts).

    For some history, there has been a bit of contention between the official arrow-rs implementation and the arrow2 implementation used by the Polars team, which includes some extra features that they find important. I think the current status is that everyone agrees that having two crates implementing the same standard is not ideal, and they are working to port any necessary features to the arrow-rs crate, with the plan of eventually switching to it and deprecating arrow2. But that's not easy.

    https://github.com/apache/arrow-rs/issues/1176

    https://github.com/jorgecarleitao/arrow2/pull/1476

  • Data Engineering with Rust
    5 projects | /r/rust | 9 May 2023
    https://github.com/jorgecarleitao/arrow2 https://github.com/apache/arrow-datafusion https://github.com/apache/arrow-ballista https://github.com/pola-rs/polars https://github.com/duckdb/duckdb
  • Polars[Query Engine/ DataFrame] 0.28.0 released :)
    3 projects | /r/rust | 29 Mar 2023
    Currently DataFusion and Polars aren't directly interoperable, IIRC, because they use different underlying Arrow implementations, but there seems to be work being done on that here: https://github.com/jorgecarleitao/arrow2/issues/1429
  • Arrow2 0.15 has been released. Happy festivities everyone =)
    1 project | /r/rust | 18 Dec 2022
  • Rust is showing a lot of promise in the DataFrame / tabular data space
    9 projects | /r/rust | 4 Oct 2022
    [arrow2](https://github.com/jorgecarleitao/arrow2) and [parquet2](https://github.com/jorgecarleitao/parquet2) are great foundational libraries for DataFrame libs in Rust.
  • Matano - Open source security lake built with Arrow2 + Rust
    2 projects | /r/rust | 3 Oct 2022
    [1] https://github.com/jorgecarleitao/arrow2
  • Polars 0.23.0 released
    3 projects | /r/rust | 4 Aug 2022
    In lockstep with arrow2's 0.13 release, we have published polars 0.23.0.
  • Arrow2 v0.13.0, now with support to read Apache ORC and COW semantics!
    1 project | /r/rust | 31 Jul 2022
  • ::lending-iterator — Lending/streaming Iterators on Stable Rust (and a pinch of HKT)
    3 projects | /r/rust | 20 Jul 2022
    This is so freaking life-saving! - we have been using StreamingIterator and FallibleStreamingIterator in libraries (arrow2 and parquet2) and the existing landscape is quite confusing for new users!
  • Mssql :(
    1 project | /r/rust | 9 Jun 2022
    arrow2 has support for MSSQL via ODBC (for which Microsoft has first-class support). Here are the integration tests we have (both read and write) against MSSQL specifically.
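
To make the "standard layout" point above concrete: an Arrow primitive array is a contiguous buffer of values plus an optional validity bitmap, and that shared layout is what lets other Arrow implementations read the same memory without copying or deserializing. The snippet below is a minimal sketch against arrow2's array API as documented for recent releases; exact constructor and method names may have shifted between versions.

    use arrow2::array::{Array, Int32Array};

    fn main() {
        // An Arrow primitive array: one contiguous buffer of i32 values plus
        // an optional validity bitmap marking which slots are null.
        let a = Int32Array::from(&[Some(1), None, Some(123)]);

        // Length and null count come from metadata, not from scanning values.
        assert_eq!(a.len(), 3);
        assert_eq!(a.null_count(), 1);

        // Because the layout is standardized, another Arrow implementation
        // (in any language) could read these buffers without a copy or a
        // deserialization step.
        println!("{:?}", a);
    }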

What are some alternatives?

When comparing ndjson.github.io and arrow2 you can also consider the following projects:

ndjson - Streaming line delimited json parser + serializer

polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust

flatten-tool - Tools for generating CSV and other flat versions of the structured data

datafusion - Apache DataFusion SQL Query Engine

miller - Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON

db-benchmark - reproducible benchmark of database-like ops

babashka - A Clojure babushka for the grey areas of Bash (native fast-starting Clojure scripting environment) [Moved to: https://github.com/babashka/babashka]

arrow-rs - Official Rust implementation of Apache Arrow

datasette - An open source multi-tool for exploring and publishing data

pyodide - Pyodide is a Python distribution for the browser and Node.js based on WebAssembly

grop - helper script for the `gron | grep | gron -u` workflow

explorer - Series (one-dimensional) and dataframes (two-dimensional) for fast and elegant data exploration in Elixir