ultrajson VS pysimdjson

Compare ultrajson vs pysimdjson and see what their differences are.

               ultrajson                                  pysimdjson
Mentions       3                                          6
Stars          4,244                                      629
Growth         0.7%                                       -
Activity       7.0                                        5.3
Latest commit  23 days ago                                3 months ago
Language       C                                          Python
License        GNU General Public License v3.0 or later   MIT License
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ultrajson

Posts with mentions or reviews of ultrajson. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-03.
  • Processing JSON 2.5x faster than simdjson with msgspec
    5 projects | /r/Python | 3 Oct 2022
    ujson
  • Benchmarking Python JSON serializers - json vs ujson vs orjson
    2 projects | dev.to | 25 May 2022
    For most cases, you would want to go with Python's standard json library, which avoids dependencies on other libraries. On the other hand, you could try out ujson, which is a simple drop-in replacement for Python's json library. If you want more speed, plus support for dataclass, datetime, numpy, and UUID instances, and you are ready to deal with more complex code, then you can try your hands on orjson (a minimal drop-in sketch follows this list).
  • The fastest tool for querying large JSON files is written in Python (benchmark)
    16 projects | news.ycombinator.com | 12 Apr 2022
    I asked about this on the Github issue regarding these benchmarks as well.

    I'm curious as to why libraries like ultrajson[0] and orjson[1] weren't explored. They aren't command line tools, but neither is pandas, right? Is it perhaps because the code required to implement the challenges is large enough that they are considered too inconvenient to use in the same way pandas was used (i.e., `python -c "..."`)?

    [0] https://github.com/ultrajson/ultrajson
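
As a rough illustration of the json/ujson/orjson trade-off described in the benchmarking post above, here is a minimal sketch (not taken from the post itself); it assumes `ujson` and `orjson` are installed via pip, and the sample payload is made up.

```python
# Minimal sketch of the trade-off: ujson is a near drop-in replacement for the
# stdlib json module, while orjson has a slightly different API (bytes output)
# but natively handles dataclasses, datetimes and UUIDs.
# Assumes `pip install ujson orjson`.
import dataclasses
import datetime
import json
import uuid

import orjson
import ujson

payload = {"id": 1, "name": "example", "tags": ["a", "b"]}

# stdlib json and ujson share the same dumps()/loads() shape (str in, str out)
assert json.loads(json.dumps(payload)) == ujson.loads(ujson.dumps(payload))

# orjson returns bytes and serializes types the other two reject by default
@dataclasses.dataclass
class Event:
    id: uuid.UUID
    at: datetime.datetime

event = Event(id=uuid.uuid4(), at=datetime.datetime.now(datetime.timezone.utc))
print(orjson.dumps(event))  # b'{"id":"...","at":"..."}'
```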

pysimdjson

Posts with mentions or reviews of pysimdjson. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-18.
  • Analyzing multi-gigabyte JSON files locally
    14 projects | news.ycombinator.com | 18 Mar 2023
  • I Use C When I Believe in Memory Safety
    5 projects | news.ycombinator.com | 5 Feb 2023
    Its magic function wrapping comes at a cost, trading ease of use for runtime performance. When you have a single C++ function to call that will run for a "long" time, pybind all the way. But pysimdjson tends to call a single function very quickly, and the overhead of a single function call is orders of magnitude slower than with Cython when being explicit with types and signatures. Wrap a class in pybind11 and Cython and compare the stack traces between the two, and the difference is startling.

    Ex: https://github.com/TkTech/pysimdjson/issues/73

  • Processing JSON 2.5x faster than simdjson with msgspec
    5 projects | /r/Python | 3 Oct 2022
    simdjson
  • [package-find] lsp-bridge
    5 projects | /r/emacs | 23 May 2022
    You are aware that simdjson is available in Python if you really need some JSON crunching? Albeit the json module in Python is implemented in C itself, so I don't understand why you think Python is slow there.
  • The fastest tool for querying large JSON files is written in Python (benchmark)
    16 projects | news.ycombinator.com | 12 Apr 2022
    json: 113.79130696877837 ms

    While `orjson` is faster than `ujson`/`json` here, it's only ~6% faster (in this benchmark). `simdjson` and `msgspec` (my library, see https://jcristharif.com/msgspec/) are much faster because they avoid creating PyObjects for fields that are never used.

    If spyql's query engine can determine the fields it will access statically before processing, you might find using `msgspec` for JSON gives a nice speedup (it'll also type-check the JSON if you know the type of each field). If this information isn't known, though, you may find that `pysimdjson` (https://pysimdjson.tkte.ch/) gives an easy speed boost, as it should be more of a drop-in for `orjson`. (A minimal sketch of both approaches follows this list.)

  • How I cut GTA Online loading times by 70%
    7 projects | /r/programming | 28 Feb 2021
    I don't think JSON is really the problem - parsing 10MB of JSON is not so slow. For example, using Python's json.load takes about 800ms for a 47MB file on my system; using something like simdjson cuts that down to ~70ms. (A rough timing sketch follows this list.)
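
To make the msgspec/pysimdjson suggestion above more concrete, here is a minimal sketch of both approaches. It is not from the thread; the record layout and field names are hypothetical, and it assumes `pysimdjson` and `msgspec` are installed.

```python
# Sketch: pull a couple of fields out of a larger JSON document without
# materializing the parts you never touch. Assumes `pip install pysimdjson msgspec`.
import msgspec
import simdjson

data = b'{"id": 123, "name": "example", "payload": {"big": "blob", "unused": "here"}}'

# pysimdjson: parse once, then access only the fields you need; untouched
# fields are not turned into Python objects.
parser = simdjson.Parser()
doc = parser.parse(data)
print(doc["id"], doc["name"])  # only these values are materialized

# msgspec: if the shape is known up front, decode straight into a typed struct
# (unknown keys such as "payload" are skipped, and types are checked).
class Record(msgspec.Struct):
    id: int
    name: str

rec = msgspec.json.decode(data, type=Record)
print(rec.id, rec.name)
```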
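
And a rough timing sketch in the spirit of the comment above, assuming a large local file at the hypothetical path "big.json" and `pysimdjson` installed. Note that pysimdjson's parse is lazy, so this is not an exact apples-to-apples comparison with json.loads, which builds the full Python object tree.

```python
# Rough timing harness: stdlib json vs pysimdjson on the same raw bytes.
import json
import time

import simdjson

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {(time.perf_counter() - start) * 1000:.1f} ms")

with open("big.json", "rb") as f:  # hypothetical large JSON file
    raw = f.read()

timed("stdlib json.loads", lambda: json.loads(raw))
timed("pysimdjson parse", lambda: simdjson.Parser().parse(raw))
```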

What are some alternatives?

When comparing ultrajson and pysimdjson you can also consider the following projects:

marshmallow - A lightweight library for converting complex objects to and from simple Python datatypes.

orjson - Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy

greenpass-covid19-qrcode-decoder - An easy tool for decoding Green Pass Covid-19 QrCode

cysimdjson - Very fast Python JSON parsing library

Fast JSON schema for Python - Fast JSON schema validator for Python.

python-rapidjson - Python wrapper around rapidjson

lupin - Python document object mapper (load python object from JSON and vice-versa)

PyLD - JSON-LD processor written in Python

PyValico - Small python wrapper around https://github.com/rustless/valico

serpy - ridiculously fast object serialization