The fastest tool for querying large JSON files is written in Python (benchmark)

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • simdjson

    Parsing gigabytes of JSON per second

    Daniel Lemire’s simdjson probably belongs in this discussion and I would be surprised if it is not the fastest tool by some margin.

    https://github.com/simdjson/simdjson

  • ojg

    Optimized JSON for Go

    For me, OjG (https://github.com/ohler55/ojg) has been great. I regularly use it on files that cannot be loaded into memory. The best JSON file format for multiple records is one JSON document per record, all in the same file (a sketch of that format follows below); OjG doesn't care whether they are on separate lines. It is fast (https://github.com/ohler55/compare-go-json) and includes a fairly complete JSONPath implementation for searches. It is similar to jq but uses JSONPath instead of a proprietary query language.

    I am biased though as I wrote OjG to handle what other tools were not able to do.
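
    A minimal sketch of that record-per-line idea in plain Python (not OjG itself): because each line is a complete JSON document, a file of any size can be processed one record at a time without loading it all into memory. The file and field names here are made up for illustration.

      import json

      # hypothetical NDJSON file: one complete JSON object per line
      with open("records.json") as f:
          for line in f:
              record = json.loads(line)  # parse a single record
              # ...filter or transform here, one record at a time...
              print(record.get("id"))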

  • compare-go-json

    A comparison of several go JSON packages.

  • ultrajson

    Ultra fast JSON decoder and encoder written in C with Python bindings

    I asked about this on the GitHub issue regarding these benchmarks as well.

    I'm curious as to why libraries like ultrajson[0] and orjson[1] weren't explored. They aren't command-line tools, but neither is pandas, right? Is it perhaps because the code required to implement the challenges is large enough that they are considered too inconvenient to use in the same way pandas was used (i.e., `python -c "..."`)?

    [0] https://github.com/ultrajson/ultrajson

    [1] https://github.com/ijl/orjson
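
    To make this concrete, here is a hedged sketch (not from the benchmark) of how orjson could be driven the same way pandas was, as a small script or `python -c` one-liner; `ujson.loads` has the same shape. The one-document-per-line layout and the field name "age" are made up.

      import sys

      import orjson  # pip install orjson

      count = 0
      for line in sys.stdin.buffer:      # one JSON document per line
          record = orjson.loads(line)    # orjson accepts bytes directly
          if record.get("age", 0) > 30:  # the "query" step
              count += 1
      print(count)

    Run it as, e.g., `python filter.py < data.json`.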

  • orjson

    Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy

  • bert

    TensorFlow code and pre-trained models for BERT

    > resulting in large programs with lots of boilerplate

    That was what I was trying to say when I said "the code required to implement the challenges is large enough that they are considered too inconvenient to use". This makes sense to me.

    Thank you for this benchmark! I'll probably switch from jq to spyql now.

    > So, orjson is part of the reason why a python-based tool outperforms tools written in C, Go, etc and deserves credit.

    Yes, I definitely think this is worth mentioning upfront in the future, since, IIUC, orjson's core uses Rust (the serde library, specifically). The initial title gave me the impression that a pure-Python JSON parsing-and-querying solution was the fastest out there, which I find misleading.

    A parallel I think is helpful is saying something like "the fastest BERT implementation is written in Python[0]". While the linked implementation is written in Python, it offloads the performance-critical parts to C/C++ through TensorFlow.

    [0] https://github.com/google-research/bert

  • catj

    Displays JSON files in a flat format.

    My main problem with jq was finding the paths to the nodes, so I wrote catj (https://github.com/soheilpro/catj) to help with that.
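
    catj's exact output aside, the idea is simple to sketch: walk the document and print every leaf value with its full path, so the path can be pasted straight into a jq or JSONPath query. This is an illustration of the concept, not catj's implementation.

      import json

      def flatten(node, path="."):
          """Print each leaf of a parsed JSON document as `path = value`."""
          if isinstance(node, dict):
              for key, value in node.items():
                  sep = "" if path.endswith(".") else "."
                  flatten(value, f"{path}{sep}{key}")
          elif isinstance(node, list):
              for i, value in enumerate(node):
                  flatten(value, f"{path}[{i}]")
          else:
              print(f"{path} = {json.dumps(node)}")

      flatten(json.loads('{"store": {"book": [{"title": "X"}]}}'))
      # .store.book[0].title = "X"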

  • db-benchmark

    reproducible benchmark of database-like ops

  • pysimdjson

    Python bindings for the simdjson project.

    > json: 113.79130696877837 ms

    While `orjson` is faster than `ujson`/`json` here, it's only ~6% faster (in this benchmark). `simdjson` and `msgspec` (my library, see https://jcristharif.com/msgspec/) are much faster because they avoid creating PyObjects for fields that are never used.

    If spyql's query engine can determine the fields it will access statically before processing, you might find using `msgspec` for JSON gives a nice speedup (it'll also type-check the JSON if you know the type of each field). If this information isn't known, though, you may find using `pysimdjson` (https://pysimdjson.tkte.ch/) gives an easy speed boost, as it should be more of a drop-in replacement for `orjson`.
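
    A hedged sketch of that msgspec approach: declare only the fields a query touches, and the decoder never materializes the rest (while type-checking what it does build). The field names and the sample document are made up.

      import msgspec

      class Record(msgspec.Struct):
          id: int
          name: str
          # keys not declared here are skipped, never built as PyObjects

      decoder = msgspec.json.Decoder(Record)

      raw = b'{"id": 1, "name": "a", "unused": [1, 2, 3]}'
      rec = decoder.decode(raw)  # raises if "id"/"name" are missing or mistyped
      print(rec.id, rec.name)    # -> 1 a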

  • msgspec

    A fast and friendly JSON/MessagePack library, with optional schema validation

  • datasette

    An open source multi-tool for exploring and publishing data

    "Datasette" (from Django co-creator) can take tabular data (SQLite, CSV, JSON, etc) and generate a REST/GraphQL API with visualization tools from it:

    https://github.com/simonw/datasette

    From the same author, "sqlite-utils" can take tabular data and create SQLite table definitions and rows from it:

    https://github.com/simonw/sqlite-utils

    "Pipe JSON (or CSV or TSV) directly into a new SQLite database file, automatically creating a table with the appropriate schema"

      > * What sort of JSON "meta-formats" are the most important/common for you? (E.g., in a file you could have object-per-line, object-of-arrays, or array-of-objects; in an SQL context you could have object-per-row or object-of-arrays-as-table, etc.) I'd love to hear about others that are important to you.
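
    As a hedged illustration of that "pipe JSON into SQLite" workflow, sqlite-utils also has a Python API that infers the table schema from the dicts it is given; the file and table names here are hypothetical.

      import json

      import sqlite_utils

      # the database file is created if it doesn't already exist
      db = sqlite_utils.Database("records.db")

      with open("records.json") as f:
          rows = json.load(f)  # an array of JSON objects

      # insert_all infers column names and types from the dicts' keys/values
      db["records"].insert_all(rows)
      print(db["records"].count)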

  • sqlite-utils

    Python CLI utility and library for manipulating SQLite databases

    "Datasette" (from Django co-creator) can take tabular data (SQLite, CSV, JSON, etc) and generate a REST/GraphQL API with visualization tools from it:

    https://github.com/simonw/datasette

    From the same author, "sqlite-utils" can take tabular data and create SQLite table definitions and rows from them:

    https://github.com/simonw/sqlite-utils

    "Pipe JSON (or CSV or TSV) directly into a new SQLite database file, automatically creating a table with the appropriate schema"

      > * What sort of JSON "meta-formats" are the most important/common for you? E.g. in a file you could have object-per-line, object-of-arrays, array-of-objects, or in an SQL context you could have object-per-row or object-of-arrays-as-table, etc). I'd love to hear about others that are important to you.

  • ojc

    Optimized JSON in C
