jq-zsh-plugin VS pysimdjson

Compare jq-zsh-plugin vs pysimdjson and see what their differences are.

                jq-zsh-plugin    pysimdjson
Mentions        4                6
Stars           298              629
Growth          -                -
Activity        6.0              5.3
Latest Commit   25 days ago      3 months ago
Language        Shell            Python
License         MIT License      GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

jq-zsh-plugin

Posts with mentions or reviews of jq-zsh-plugin. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-07.
  • Interactive Examples for Learning Jq
    13 projects | news.ycombinator.com | 7 Nov 2023
  • Analyzing multi-gigabyte JSON files locally
    14 projects | news.ycombinator.com | 18 Mar 2023
    https://github.com/reegnz/jq-zsh-plugin

    I find that for big datasets choosing the right format is crucial. I use the json-lines format plus some shell filtering (e.g. head and tail to limit the range, egrep or ripgrep for the more trivial filtering) to reduce the dataset to a couple of megabytes, then use that jq REPL of mine to iterate fast on the final jq expression.

    I found that the REPL form factor works really well when you don't exactly know what you're digging for.
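
    The same reduce-then-iterate idea can be sketched in plain Python for illustration (the original workflow uses head/tail, ripgrep and the plugin's jq REPL; the file name and filter pattern below are made up):

    ```python
    import itertools
    import json
    import re

    # Hypothetical inputs: a large JSON Lines file and a coarse pre-filter pattern.
    SOURCE = "events.jsonl"
    PATTERN = re.compile(r'"status":\s*"error"')

    # Step 1: shrink the dataset first, mirroring head/tail + ripgrep in the shell version.
    with open(SOURCE) as src, open("reduced.jsonl", "w") as dst:
        for line in itertools.islice(src, 100_000):  # limit the range, like head
            if PATTERN.search(line):                 # cheap text filtering, like ripgrep
                dst.write(line)

    # Step 2: iterate on the actual query against the small reduced file.
    with open("reduced.jsonl") as f:
        for line in f:
            record = json.loads(line)
            print(record.get("user"), record.get("status"))
    ```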

pysimdjson

Posts with mentions or reviews of pysimdjson. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-18.
  • Analyzing multi-gigabyte JSON files locally
    14 projects | news.ycombinator.com | 18 Mar 2023
  • I Use C When I Believe in Memory Safety
    5 projects | news.ycombinator.com | 5 Feb 2023
    pybind11's magic function wrapping comes at a cost, trading ease of use for runtime performance. When you have a single C++ function to call that will run for a "long" time, pybind11 all the way. But pysimdjson tends to call a single function that returns very quickly, and the overhead of a single function call is orders of magnitude higher than with Cython when being explicit with types and signatures. Wrap a class in pybind11 and in Cython and compare the stack traces between the two; the difference is startling.

    Ex: https://github.com/TkTech/pysimdjson/issues/73

  • Processing JSON 2.5x faster than simdjson with msgspec
    5 projects | /r/Python | 3 Oct 2022
  • [package-find] lsp-bridge
    5 projects | /r/emacs | 23 May 2022
    You are aware of simdjson being available in Python if you really need some JSON crunching? Although the json module in Python is implemented in C itself, so I don't understand why you think Python is slow there.
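
    For context, a minimal pysimdjson usage sketch (the package installs under the module name `simdjson`); the sample document below is made up:

    ```python
    import simdjson  # pysimdjson installs under the module name "simdjson"

    parser = simdjson.Parser()

    # Parse a made-up document. The result is a lazy proxy, not a plain dict,
    # so only the values you actually touch are turned into Python objects.
    doc = parser.parse(b'{"user": {"name": "ada", "id": 7}, "tags": ["x", "y"]}')

    print(doc["user"]["name"])    # -> "ada"
    print(doc["tags"].as_list())  # convert the proxy array into a real Python list
    ```
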
  • The fastest tool for querying large JSON files is written in Python (benchmark)
    16 projects | news.ycombinator.com | 12 Apr 2022
    json: 113.79130696877837 ms

    While `orjson` is faster than `ujson`/`json` here, it's only ~6% faster (in this benchmark). `simdjson` and `msgspec` (my library, see https://jcristharif.com/msgspec/) are much faster because they avoid creating PyObjects for fields that are never used.

    If spyql's query engine can statically determine the fields it will access before processing, you might find that using `msgspec` for JSON gives a nice speedup (it will also type-check the JSON if you know the type of each field). If this information isn't known, though, you may find that `pysimdjson` (https://pysimdjson.tkte.ch/) gives an easy speed boost, as it should be more of a drop-in replacement for `orjson`.
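
    A sketch of what that lazy behaviour looks like with `pysimdjson` (the payload below is made up): only the fields you actually access get materialized as Python objects, while the rest of the parsed document stays inside simdjson's internal representation.

    ```python
    import json
    import simdjson

    # A made-up payload with one small field we care about and a lot we don't.
    payload = json.dumps(
        {"meta": {"id": 123}, "rows": [{"x": i, "y": i * i} for i in range(10_000)]}
    ).encode()

    parser = simdjson.Parser()
    doc = parser.parse(payload)

    # Only this value becomes a Python object; the 10,000 row dicts are never built.
    print(doc.at_pointer("/meta/id"))  # JSON Pointer access -> 123

    # json.loads(payload)["meta"]["id"] would eagerly build every row as PyObjects
    # first, which is exactly the overhead msgspec and simdjson avoid.
    ```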

  • How I cut GTA Online loading times by 70%
    7 projects | /r/programming | 28 Feb 2021
    I don't think JSON is really the problem - parsing 10MB of JSON is not that slow. For example, Python's json.load takes about 800ms for a 47MB file on my system; using something like simdjson cuts that down to ~70ms.
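
    A rough way to reproduce that kind of comparison on your own machine (the file path is a placeholder and absolute numbers will differ):

    ```python
    import json
    import time
    import simdjson

    PATH = "big.json"  # placeholder: point this at any large JSON file

    t0 = time.perf_counter()
    with open(PATH, "rb") as f:
        json.load(f)                 # stdlib parser builds the full object tree
    t1 = time.perf_counter()

    parser = simdjson.Parser()
    parser.load(PATH)                # pysimdjson parses without materializing it all
    t2 = time.perf_counter()

    print(f"json.load:     {(t1 - t0) * 1000:.1f} ms")
    print(f"simdjson.load: {(t2 - t1) * 1000:.1f} ms")
    ```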

What are some alternatives?

When comparing jq-zsh-plugin and pysimdjson you can also consider the following projects:

semi_index - Implementation of the JSON semi-index described in the paper "Semi-Indexing Semi-Structured Data in Tiny Space"

orjson - Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy

z-a-readurl - 🌀 An annex that delivers the capability to automatically download the newest version of a file whose URL is hosted on a webpage

cysimdjson - Very fast Python JSON parsing library

json-buffet

ultrajson - Ultra fast JSON decoder and encoder written in C with Python bindings

lnav - Log file navigator

Fast JSON schema for Python - Fast JSON schema validator for Python.

reddit_mining

lupin is a Python JSON object mapper - Python document object mapper (load Python objects from JSON and vice versa)

ClickHouse - ClickHouse® is a free analytics DBMS for big data

PyValico - Small Python wrapper around https://github.com/rustless/valico