json-buffet VS json_benchmark

Compare json-buffet vs json_benchmark and see what their differences are.

                  json-buffet          json_benchmark
Mentions          2                    2
Stars             0                    20
Growth            -                    -
Activity          3.0                  3.7
Last commit       about 1 year ago     8 months ago
Language          C++                  Python
License           MIT License          -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

json-buffet

Posts with mentions or reviews of json-buffet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-18.
  • Analyzing multi-gigabyte JSON files locally
    14 projects | news.ycombinator.com | 18 Mar 2023
    And here's the code: https://github.com/multiversal-ventures/json-buffet

    The API isn't the best. I'd have preferred an iterator-based solution as opposed to this callback-based one. But we worked with what rapidjson gave us for the proof of concept.

  • Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
    20 projects | news.ycombinator.com | 6 Mar 2023
    Ha! Thanks to you, today I found out how big those uncompressed JSON files really are (the data wasn't accessible to me, so I shared the tool with my colleague and he was the one who ran the queries on his laptop): https://www.dolthub.com/blog/2022-09-02-a-trillion-prices/ .

    And yep, it was more or less the way you did it with ijson. I found ijson just a day after I finished the prototype. Rapidjson would probably be faster, especially after enabling SIMD. But the indexing was a one-time thing. (The first sketch after this list shows the iterator style with ijson.)

    We have open sourced the codebase. Here's the link: https://github.com/multiversal-ventures/json-buffet . Since this was a quick and dirty prototype, comments were sparse. I have updated the Readme, and added a sample json-fetcher. Hope this is more useful for you.

    Another unwritten TODO was to nudge the data providers towards more streaming-friendly compression formats - and then just create an index to fetch the data directly from their compressed archives. That would have saved everyone a LOT of $$$. (A rough sketch of the byte-range idea follows below.)
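
The posts above contrast rapidjson's callback (SAX) interface with the iterator style that ijson offers. As a rough illustration of the iterator approach - not code from json-buffet, and the file layout (one huge top-level JSON array of records with a "price" field) is assumed - a streaming filter in Python might look like this:

    # Hedged sketch: iterator-style streaming parse with ijson.
    # Assumes the input is a single enormous JSON array of objects.
    import ijson

    def expensive_records(path, threshold=100.0):
        """Yield matching records without loading the whole file into memory."""
        with open(path, "rb") as f:
            # ijson.items lazily yields each element under the "item" prefix,
            # i.e. every object in the top-level array, one at a time.
            for record in ijson.items(f, "item"):
                if float(record.get("price", 0)) > threshold:
                    yield record

    # Usage:
    # for row in expensive_records("prices.json"):
    #     print(row)

The generator keeps memory flat regardless of file size, which is the property the commenter wanted from an iterator-based API.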

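The indexing idea in the last post - fetching slices of a huge remote JSON file instead of downloading all of it - can be sketched as follows. This is not json-buffet's actual code: the index format (key mapped to the byte offset and length of its serialized value, built by a one-time streaming pass) and the URL are assumptions for illustration, and the server is assumed to honor HTTP Range requests.

    import json
    import requests

    def fetch_value(url, index, key):
        """Fetch a single value out of a huge remote JSON file via a Range request.

        `index` maps a key to the (offset, length) of its serialized value,
        produced earlier by one streaming pass over the file.
        """
        start, length = index[key]
        resp = requests.get(url, headers={"Range": f"bytes={start}-{start + length - 1}"})
        resp.raise_for_status()
        # With a 206 Partial Content response, resp.content is just the slice we asked for.
        return json.loads(resp.content)

    # Usage (hypothetical index entry):
    # idx = {"AAPL": (1024, 96)}
    # print(fetch_value("https://example.com/prices.json", idx, "AAPL"))
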
json_benchmark

Posts with mentions or reviews of json_benchmark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-06.
  • Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
    20 projects | news.ycombinator.com | 6 Mar 2023
    If you're primarily targeting Python as an application layer, you may also want to check out my msgspec library[1]. All the perf benefits of e.g. yyjson, but with schema validation like pydantic. It regularly benchmarks[2] as the fastest JSON library for Python. Much of the overhead of decoding JSON -> Python comes from the Python layer, and msgspec employs every trick I know to minimize that overhead. (A short msgspec example follows this list.)

    [1]: https://github.com/jcrist/msgspec

    [2]: https://github.com/TkTech/json_benchmark

  • Sunday Daily Thread: What's everyone working on this week?
    7 projects | /r/Python | 17 Apr 2022
    - Adding NVMe drive support to SMARTie, https://github.com/tktech/smartie, which is a pure-Python cross-platform library for getting disk information like serial number and SMART attributes (like disk temperature)
    - json_benchmark, https://github.com/tktech/json_benchmark, which is a new benchmark and correctness test for the more modern Python JSON libraries
    - py_yyjson, https://github.com/tktech/py_yyjson, which is still a WIP and provides Python bindings to the yyjson library, which offers comparable speed to simdjson but more flexibility when parsing (comments, arbitrary-sized numbers, Inf/NaN, etc.)
    - And some fixes to https://github.com/TkTech/humanmark, which is a markdown library used to edit the README.md in json_benchmark above.
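
For reference, the msgspec approach mentioned in the first post above pairs decoding with schema validation. A minimal sketch - the Price schema here is invented for illustration and is not taken from json_benchmark:

    import msgspec

    # Struct fields double as the schema; decoding validates against it.
    class Price(msgspec.Struct):
        sku: str
        retailer: str
        amount: float

    # A reusable decoder typed to a list of Price records.
    decoder = msgspec.json.Decoder(list[Price])

    raw = b'[{"sku": "A1", "retailer": "acme", "amount": 9.99}]'
    prices = decoder.decode(raw)
    print(prices[0].amount)  # 9.99

A malformed or mistyped payload raises a validation error at decode time, which is the pydantic-like behavior the comment refers to.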

What are some alternatives?

When comparing json-buffet and json_benchmark you can also consider the following projects:

japronto - Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.

semi_index - Implementation of the JSON semi-index described in the paper "Semi-Indexing Semi-Structured Data in Tiny Space"

data-analysis

is2 - embedded RESTy http(s) server library from Edgio

search-dw - search-dw is a Python utility to automate "search and download" via the command line. It might be useful if you need to download the results of a Google search on a given topic in one go

reddit_mining

Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing

jsplit - A Go program to split large JSON files into many jsonl files

ClickHouse - ClickHouse® is a free analytics DBMS for big data

smartie - Pure-python ATA/SATA/ATAPI/SCSI and disk enumeration library for Linux/Windows/OS X.