ndjson-spec VS TileDB

Compare ndjson-spec vs TileDB and see what their differences are.

              ndjson-spec        TileDB
Mentions      6                  14
Stars         633                1,771
Growth        2.8%               1.4%
Last commit   over 1 year ago    7 days ago
Activity      0.0                9.7
Language      -                  C++
License       -                  MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

ndjson-spec

Posts with mentions or reviews of ndjson-spec. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-24.
  • Documentation for the JSON Lines text file format
    4 projects | news.ycombinator.com | 24 Feb 2024
    What’s the difference between terminators and separators here? The ndjson spec [0] doesn’t say anything like that, and it seems that ndjson and jsonlines are identical in what documents they accept.

    [0]: https://github.com/ndjson/ndjson-spec

  • Does anyone use JSON files as a database? Best practises? Or should I use a "real" database?
    3 projects | /r/node | 29 Jan 2023
    It's possible to work around this and still use a JSON-ish format. For example, my @broofa/persistentmap project provides an ES6 Map API backed by a persistent append-only file in NDJSON format. (Note: NDJSON isn't really a spec so much as a "convention"). But, honestly,
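
    As a rough illustration of that approach (not @broofa/persistentmap itself, which is a JavaScript/ES6 library), here is a minimal Python sketch of a dict backed by an append-only NDJSON log; the record shape is made up for the example:

```python
import json

class PersistentMap(dict):
    """Toy dict persisted to an append-only NDJSON log (illustrative only)."""

    def __init__(self, path):
        super().__init__()
        self.path = path
        try:
            # Replay the log on startup; the last entry for a key wins.
            with open(path, encoding="utf-8") as f:
                for line in f:
                    entry = json.loads(line)
                    if entry.get("deleted"):
                        super().pop(entry["key"], None)
                    else:
                        super().__setitem__(entry["key"], entry["value"])
        except FileNotFoundError:
            pass

    def __setitem__(self, key, value):
        # Append one JSON document per line instead of rewriting the whole file.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
        super().__setitem__(key, value)

    def __delitem__(self, key):
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps({"key": key, "deleted": True}) + "\n")
        super().__delitem__(key)
```
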
  • Why isn’t there a decent file format for tabular data?
    13 projects | news.ycombinator.com | 3 May 2022
    ndjson is actually a really pragmatic choice here that should not be overlooked.

    Tabular formats break down when the data stops being tabular. This comes up a lot. People love spreadsheets as editing tools, but then they end up doing things like putting comma-separated values in a cell. I've also seen business people use empty cells to indicate hierarchical 'inheritance'. An alternate interpretation of that is that the data has some kind of hierarchy and isn't really row-based. People shoehorn all sorts of stuff into spreadsheets simply because they are there.

    With ndjson, every line is a JSON object. Every cell is a named field. If you need multiple values, you can use arrays for the fields. JSON has actual types (int, float, string, boolean). So you can have both hierarchical and multi-valued data in a row; the case where all the fields are simple primitives is just the simple case. It has an actual specification too: https://github.com/ndjson/ndjson-spec. I like it because I can stream-process it and represent arbitrarily complex objects/documents instead of having to flatten everything into columns. The parsing overhead makes it more expensive to use than TSV, though. The file size is fine if you use e.g. gzip compression; it generally compresses really well.
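
    For example, a minimal Python sketch of that stream-processing pattern (the file name and field names are hypothetical):

```python
import json

# Each line is an independent JSON document, so the file can be processed
# one record at a time without loading everything into memory.
with open("events.ndjson", encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue
        record = json.loads(line)
        # Nested and multi-valued fields need no flattening into columns.
        for tag in record.get("tags", []):
            print(record["id"], tag)
```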

    But I also use tab-separated values quite often for simpler data. I mainly like it because Google Sheets provides that as an export option and is actually a great editor for tabular data that I can just hand to non-technical people.

    Both file formats can be easily manipulated with command-line tools (jq, csvkit, sed, etc.). Both can be processed using mature parsers in a wide range of languages. If you really want, you can edit them with simple text editors, though you should probably be careful with that. Tools like bat know how to format and highlight these files as well. Tools like that are important because you can use them and script them together rather than reinventing wheels.

    Formats like Parquet are cumbersome mainly because none of the tools I mention support it. No editors. Not a lot of command-line tools. No formatting tools. If you want to inspect the data, you pretty much have to write a program to do it. I guess this would be fixable, but people don't seem very interested in doing that work. Parquet becomes nice when you need to process data at scale and in any case use a lot of specialized tooling and infrastructure. Not for everyone, in other words.

    Character encoding is not an issue with either TSV or ndjson if you simply use UTF-8, always. I see no good technical reason why you should use anything else; anything else should be treated as a bug or legacy. Of course a lot of data has encoding issues regardless. Shit in, shit out, basically. Fix it at the source, if you can.

    The last point is actually key, because all of the issues with e.g. CSV usually start with people using really crappy tools to produce the source data. Switching to a different file format won't fix these issues, since you still deal with the same crappy tools, which of course do not support the new file format. Anything else you could just fix to not suck to begin with. And if you do, it stops being an issue. The problem is when you can't.

    Nothing wrong with TSV if you use UTF-8 and some nice framework that generates properly escaped values and does all the right things. The worst you can say about it is that there are a bit too many choices here, and people tend to improvise their own crappy data-generation tools with escaping bugs and other issues. Most of the pain is self-inflicted. The reason CSV/TSV are popular is that you don't need a lot of frameworks/tools. But of course the flip side is that DIY leads to people introducing all sorts of unnecessary issues. Try not to do that.
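
    As a concrete example of letting a library handle the escaping, a small Python sketch using the standard csv module's excel-tab dialect (the file name and rows are made up):

```python
import csv

rows = [
    {"name": "Ada", "note": "contains a \t tab and a \"quote\""},
    {"name": "Grace", "note": "spans\ntwo lines"},
]

# The csv module quotes fields containing tabs, quotes, or newlines,
# so values round-trip instead of silently breaking the table.
with open("out.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "note"], dialect="excel-tab")
    writer.writeheader()
    writer.writerows(rows)
```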

  • Has UML died without anyone noticing?
    4 projects | /r/programming | 25 Apr 2021
    Newline-separated JSON is pretty close. It's not exactly a "standard" standard like something the IETF defines, but it's pretty easy to work with anyhow.
  • json to csv on google cloud fusion
    1 project | /r/googlecloud | 16 Feb 2021

TileDB

Posts with mentions or reviews of TileDB. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-01.
  • Ask HN: Who is hiring? (May 2024)
    8 projects | news.ycombinator.com | 1 May 2024
    TileDB, Inc. | Full-Time | REMOTE | USA, Greece/EU | https://tiledb.com

    TileDB has recently announced a $34 million Series B fund-raise and is actively hiring for engineers across a range of roles (SRE, backend/distributed systems, database internals, and more). You will have the opportunity to work on innovative technology that creates impact for challenging problems in genomics, geospatial, machine learning, distributed systems, and many other areas.

    TileDB Cloud is the modern database, allowing developers and scientists to capture, analyze, and share any data with any tool. We build on a broad foundation of open source, maintaining the TileDB storage engine, libraries for genomics (single-cell and population), geospatial (raster, point clouds, and more), a TileDB visualization engine extending Babylon.js, and much more (https://github.com/TileDB-Inc/TileDB).

    With TileDB, all data — tables, genomics, images, videos, location, time-series — is captured as multi-dimensional arrays. To supercharge this data, TileDB Cloud implements a serverless infrastructure delivering query execution, access control, data and code sharing, and distributed computing at global scale — eliminating cluster management, minimizing TCO, and promoting scientific collaboration and reproducibility.

    Website: https://tiledb.com | GitHub: https://github.com/TileDB-Inc/TileDB | Blog: https://tiledb.com/blog

    We are actively hiring for several roles including:

    - Site Reliability Engineer (k8s, Terraform, automation, Prometheus, CloudWatch, GitOps; Golang, Python)

  • Ask HN: Who is hiring? (September 2023)
    14 projects | news.ycombinator.com | 1 Sep 2023
    - single cell genomics: in collaboration with the Chan-Zuckerberg Initiative, we recently released TileDB-SOMA for single cell data, with APIs for both Python and R built around a common storage specification: https://tiledb.com/blog/tiledb-101-single-cell

    With TileDB, all data — tables, genomics, images, videos, location, time-series — across multiple domains is captured as multi-dimensional arrays. TileDB Cloud implements a totally serverless infrastructure and delivers access control, easier data and code sharing and distributed computing at global scale, eliminating cluster management, minimizing TCO and promoting scientific collaboration and reproducibility.

    Website: https://tiledb.com

    GitHub: https://github.com/TileDB-Inc/TileDB

  • Why TileDB as a Vector Database
    2 projects | news.ycombinator.com | 2 Aug 2023
    Stavros from TileDB here (Founder and CEO). I thought of requesting some feedback from the community on this blog. It was only natural for a multi-dimensional array database like TileDB to offer vector (i.e., 1D array) search capabilities. But the team managed to do it very well and the results surprised us. We are just getting started in this domain and a lot of new algorithms and features are coming up, but the sooner we get feedback the better.

    TileDB-Vector-Search Github repo: https://github.com/TileDB-Inc/TileDB-Vector-Search

    TileDB-Embedded (core array engine) Github repo: https://github.com/TileDB-Inc/TileDB

    TileDB 101: Vector Search (blog to get kickstarted): https://tiledb.com/blog/tiledb-101-vector-search/

  • Ask HN: Who is hiring? (August 2023)
    13 projects | news.ycombinator.com | 1 Aug 2023
    TileDB, Inc. | Full-Time | REMOTE | USA | Greece | https://tiledb.com

    TileDB is the database for complex data, allowing data scientists, researchers, and analysts to access, analyze, and share any data with any tool at global scale. We have just launched a vector search library leveraging TileDB and TileDB Cloud for powerful local search and seamless scaling to multi-modal organizational datasets and batched computation: https://tiledb.com/blog/why-tiledb-as-a-vector-database

    With TileDB, all data — tables, genomics, images, videos, location, time-series — across multiple domains is captured as multi-dimensional arrays. Our vector search library and other offerings are designed to empower these datasets with extreme interoperability via numerous APIs and tool integrations across the data science ecosystem, eliminating the hassles and inefficiencies of data conversion. TileDB Cloud implements a totally serverless infrastructure and delivers access control, easier data and code sharing and distributed computing at global scale, eliminating cluster management, minimizing TCO and promoting scientific collaboration and reproducibility.

  • Ask HN: Who is hiring? (December 2022)
    14 projects | news.ycombinator.com | 1 Dec 2022
    TileDB, Inc. | Full-Time | REMOTE | USA | Greece | https://tiledb.com

    TileDB transforms the lives of analytics professionals and data scientists with a universal database, allowing them to access, analyze, and share any data with any tool at global scale. TileDB unifies the way we think about data, delivering superior performance and foundational data management capabilities. All data — tables, genomics, images, videos, location, time-series — across multiple domains is captured as multi-dimensional arrays. TileDB offers extreme interoperability via numerous APIs and tool integrations across the data science ecosystem, eliminating the hassles and inefficiencies of data conversion. TileDB Cloud implements a totally serverless infrastructure and delivers access control, easier data and code sharing and distributed computing at global scale, eliminating cluster management, minimizing TCO and promoting scientific collaboration and reproducibility.

    TileDB, Inc. was spun out of MIT and Intel Labs in May 2017 and is backed by Two Bear Capital, Nexus Venture Partners, Uncorrelated Ventures, Intel Capital and Big Pi.

    Recent HN article: https://news.ycombinator.com/item?id=23896131

    Website: https://tiledb.com

    GitHub: https://github.com/TileDB-Inc/TileDB

    Docs: https://docs.tiledb.com

    Blog: https://tiledb.com/blog

    Our headquarters are located in Cambridge, MA, and we have a subsidiary in Athens, Greece. We offer the ability to work remotely. If you are located outside of the USA and Greece, we have options to accommodate this, so don't hesitate to apply!

    We have several open positions aimed at increasing TileDB's feature set, growth and adoption. You will have the opportunity to work on innovative technology that creates impact on challenging and exciting problems in Genomics, Geospatial, Time Series, and more. Immediate features on the roadmap for TileDB Cloud include advanced distributed computations, advanced computation pushdown, improved multi-cloud deployments and more.

    We are actively seeking:

    - Senior Golang Engineer

    - Senior Python Engineer

    - Site Reliability Engineer

    - React Frontend Engineer

    Apply today at https://tiledb.workable.com !

  • Historical weather data API for machine learning, free for non-commercial
    1 project | news.ycombinator.com | 6 Jul 2022
    Interesting. Have you come across TileDB before?

    https://tiledb.com/

  • Why isn’t there a decent file format for tabular data?
    13 projects | news.ycombinator.com | 3 May 2022
    Hi folks, Stavros from TileDB here. Here are my two cents on tabular data. TileDB (Embedded) is a very serious competitor to Parquet, the only other sane choice IMO when it comes to storing large volumes of tabular data (especially when combined with Arrow). Admittedly, we haven’t been advertising TileDB’s tabular capabilities, but that’s only because we were busy with much more challenging applications, such as genomics (population and single-cell), LiDAR, imaging and other very convoluted (from a data format perspective) domains.

    Similar to Parquet:

    * TileDB is columnar and comes with a lot of compressors, checksum and encryption filters.

    * TileDB is built in C++ with multi-threading and vectorization in mind

    * TileDB integrates with Arrow, using zero-copy techniques

    * TileDB has numerous optimized APIs (C, C++, C#, Python, R, Java, Go)

    * TileDB pushes compute down to storage, similar to what Arrow does

    Better than Parquet:

    * TileDB is multi-dimensional, allowing rapid multi-column conditions

    * TileDB builds versioning and time-traveling into the format (no need for Delta Lake, Iceberg, etc)

    * TileDB allows for lock-free parallel writes / parallel reads with ACID properties (no need for Delta Lake, Iceberg, etc)

    * TileDB can handle more than tables, for example n-dimensional dense arrays (e.g., for imaging, video, etc)

    Useful links:

    * Github repo (https://github.com/TileDB-Inc/TileDB)

    * TileDB Embedded overview (https://tiledb.com/products/tiledb-embedded/)

    * Docs (https://docs.tiledb.com/)

    * Webinar on why arrays as a universal data model (https://tiledb.com/blog/why-arrays-as-a-universal-data-model)

    Happy to hear everyone’s thoughts.
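
    To make the features above concrete, here is a minimal sketch using the TileDB Python API (the array URI and data are invented; this follows the TileDB quickstart pattern, so treat the details as an approximation rather than a definitive example):

```python
import numpy as np
import tiledb

uri = "example_dense_array"  # hypothetical local array URI

# A 2D dense array with one float64 attribute.
dom = tiledb.Domain(
    tiledb.Dim(name="rows", domain=(0, 3), tile=4, dtype=np.int32),
    tiledb.Dim(name="cols", domain=(0, 3), tile=4, dtype=np.int32),
)
schema = tiledb.ArraySchema(
    domain=dom, sparse=False, attrs=[tiledb.Attr(name="a", dtype=np.float64)]
)
tiledb.Array.create(uri, schema)

# Each write becomes an immutable, timestamped fragment, which is what
# underlies the versioning / time-traveling mentioned above.
with tiledb.open(uri, mode="w") as A:
    A[:] = np.random.rand(4, 4)

# Read a slice back; opening with a timestamp would return a past version.
with tiledb.open(uri, mode="r") as A:
    print(A[0:2, 0:2]["a"])
```
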

  • Genomics data management reimagined. Analyze and share enormous variant datasets with TileDB Cloud.
    1 project | /r/u_tiledb | 28 Jan 2022
  • TileDB VS Activeloop hub - a user suggested alternative
    2 projects | 20 Oct 2021
  • Seeking options for multidimensional data storage
    1 project | /r/Database | 12 Aug 2021
    It could be worth checking out TileDB: https://github.com/TileDB-Inc/TileDB. The entire system, down to the data format itself, is optimized around storing multi-dimensional arrays. It also supports timestamps and real numbers as dimensions, which could be handy given your example data. [Full disclosure: I currently work for TileDB.]
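
    As a hedged sketch of what real-valued dimensions look like with the TileDB Python API (the URI, dimension names, and values are invented for the example):

```python
import numpy as np
import tiledb

uri = "example_sparse_array"  # hypothetical URI

# Sparse 2D array with real-valued dimensions, e.g. (latitude, longitude).
dom = tiledb.Domain(
    tiledb.Dim(name="lat", domain=(-90.0, 90.0), tile=1.0, dtype=np.float64),
    tiledb.Dim(name="lon", domain=(-180.0, 180.0), tile=1.0, dtype=np.float64),
)
schema = tiledb.ArraySchema(
    domain=dom, sparse=True, attrs=[tiledb.Attr(name="temp", dtype=np.float32)]
)
tiledb.Array.create(uri, schema)

# Sparse writes take explicit coordinates per dimension.
with tiledb.open(uri, mode="w") as A:
    A[[40.7, 51.5], [-74.0, -0.1]] = {"temp": np.array([21.5, 14.0], dtype=np.float32)}
```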

What are some alternatives?

When comparing ndjson-spec and TileDB you can also consider the following projects:

hsv5 - HTML5 Based Alternative to CSV, TSV, JSONL, etc

ClickHouse - ClickHouse® is a free analytics DBMS for big data

tplant - Typescript to plantuml

RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.

rson - Rust Object Notation

MongoDB C Driver - The Official MongoDB driver for C language

odiff - The fastest pixel-by-pixel image visual difference tool in the world.

LevelDB - LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.

parquet-wasm - Rust-based WebAssembly bindings to read and write Apache Parquet data

libmdbx - One of the fastest embeddable key-value ACID database without WAL. libmdbx surpasses the legendary LMDB in terms of reliability, features and performance.

node-skeleton - Starter skeleton for Node applications

MongoDB Libbson