typedload VS json-buffet

Compare typedload and json-buffet and see how they differ.

                typedload                                 json-buffet
Mentions        5                                         2
Stars           254                                       0
Growth          -                                         -
Activity        8.1                                       3.0
Latest commit   7 days ago                                about 1 year ago
Language        Python                                    C++
License         GNU General Public License v3.0 or later  MIT License
Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

typedload

Posts with mentions or reviews of typedload. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-06.
  • Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
    20 projects | news.ycombinator.com | 6 Mar 2023
    Author of typedload here!

    FastAPI relies on (not so fast) pydantic, which is one of the slowest libraries in that category.

    Don't expect to find such benchmarks on the pydantic documentation itself, but the competing libraries will have them.

    [0] https://ltworf.github.io/typedload/

  • Pydantic vs Protobuf vs Namedtuples vs Dataclasses
    4 projects | /r/Python | 25 Feb 2023
    I wrote typedload, which is significantly faster than pydantic. Just uses normal dataclasses/attrs/NamedTuple, has a better API and is pure Python!
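
    A minimal sketch of what that claim looks like in practice, assuming nothing beyond typedload itself (the User model and its fields are invented for illustration):

        from dataclasses import dataclass

        import typedload

        @dataclass
        class User:
            id: int
            name: str

        # load() validates a plain dict against the annotated type...
        user = typedload.load({"id": 1, "name": "Ada"}, User)
        print(user)  # User(id=1, name='Ada')

        # ...and dump() goes back the other way.
        print(typedload.dump(user))  # {'id': 1, 'name': 'Ada'}
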
  • Is computer science good for anything? (original Italian title: "Informatica serve a qualcosa?")
    1 project | /r/Universitaly | 4 Feb 2023
  • Show HN: Python framework is faster than Golang Fiber
    19 projects | news.ycombinator.com | 10 Jan 2023
    I read all the perftests in the repo. Nearly all of them parse a structure that repeats the same or a similar thing a couple hundred thousand times, and the timing function returns the min and max of 5 attempts. I just picked one example for posting.

    Not a Python expert, but could the Pydantic tests be unrealistic and/or misleading because they use kwargs in __init__ [1] to parse the object instead of calling the parse_obj class method [2]? According to some PEPs [3], doesn't Python create a new dictionary for that parameter, which would then be included in the timing? It would be unfortunate if that accounted for the difference.

    Something else I think about: if a performance test doesn't produce a side effect that is checked, a smart compiler or runtime could optimize the whole benchmark away, or the workload could become too easy for the CPU's branch predictor. I recall that happening to me in Java in the past, though it probably hasn't happened here in Python.

    [1] https://github.com/ltworf/typedload/blob/37c72837e0a8fd5f350...

    [2] https://docs.pydantic.dev/usage/models/#helper-functions

    [3] https://peps.python.org/pep-0692/
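
    To make the kwargs-versus-helper question above concrete, here is a hedged micro-benchmark sketch against the pydantic v1 API (the Point model and the iteration count are invented; in pydantic v2 the helper is model_validate instead of parse_obj):

        import timeit

        from pydantic import BaseModel

        class Point(BaseModel):
            x: int
            y: int

        data = {"x": 1, "y": 2}

        # Path 1: keyword expansion -- Python builds a fresh dict for **data
        # on every call, and that work lands inside the measured time.
        t_kwargs = timeit.timeit(lambda: Point(**data), number=100_000)

        # Path 2: the v1 helper that validates the mapping directly.
        t_parse = timeit.timeit(lambda: Point.parse_obj(data), number=100_000)

        print(f"kwargs:    {t_kwargs:.3f}s")
        print(f"parse_obj: {t_parse:.3f}s")
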

json-buffet

Posts with mentions or reviews of json-buffet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-18.
  • Analyzing multi-gigabyte JSON files locally
    14 projects | news.ycombinator.com | 18 Mar 2023
    And here's the code: https://github.com/multiversal-ventures/json-buffet

    The API isn't the best. I'd have preferred an iterator-based solution as opposed to this callback-based one, but we worked with what rapidjson gave us for the proof of concept.
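
    json-buffet itself is C++, but the iterator-versus-callback trade-off described above can be sketched in Python with ijson (which also comes up in the next comment); the sample document here is invented:

        import io

        import ijson

        doc = io.BytesIO(b'{"prices": [{"sku": "a", "usd": 3}, {"sku": "b", "usd": 5}]}')

        # Iterator style: lazily pull complete objects out of one JSON path.
        for item in ijson.items(doc, "prices.item"):
            print(item["sku"], item["usd"])

        doc.seek(0)

        # Event style (closest to a callback API): a flat stream of
        # (prefix, event, value) tuples for every token in the document.
        for prefix, event, value in ijson.parse(doc):
            if event == "number":
                print(prefix, value)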

  • Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
    20 projects | news.ycombinator.com | 6 Mar 2023
    Ha! Thanks to you, today I found out how big those uncompressed JSON files really are (the data wasn't accessible to me, so I shared the tool with my colleague and he was the one who ran the queries on his laptop): https://www.dolthub.com/blog/2022-09-02-a-trillion-prices/ .

    And yep, it was more or less the way you did it with ijson. I found ijson just a day after I finished the prototype. Rapidjson would probably be faster, especially after enabling SIMD. But the indexing was a one-time thing.

    We have open-sourced the codebase. Here's the link: https://github.com/multiversal-ventures/json-buffet . Since this was a quick and dirty prototype, comments were sparse. I have updated the README and added a sample json-fetcher. Hope this is more useful for you.

    Another unwritten TODO was to nudge the data providers towards more streaming-friendly compression formats, and then just create an index to fetch the data directly from their compressed archives. That would have saved everyone a LOT of $$$.
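
    A rough illustration of the index-once, seek-later idea this thread describes; this is not json-buffet's actual API, and it simplifies the problem to a JSON-lines file with one record per line:

        import json

        def build_offset_index(path):
            """Map each record's id to its byte offset, in one streaming pass."""
            index = {}
            with open(path, "rb") as f:
                while True:
                    offset = f.tell()
                    line = f.readline()
                    if not line:
                        break
                    index[json.loads(line)["id"]] = offset
            return index

        def fetch(path, index, key):
            """Seek straight to one record instead of re-parsing the file."""
            with open(path, "rb") as f:
                f.seek(index[key])
                return json.loads(f.readline())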

What are some alternatives?

When comparing typedload and json-buffet you can also consider the following projects:

codon - A high-performance, zero-overhead, extensible Python compiler using LLVM

japronto - Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.

ustore - Multi-Modal Database replacing MongoDB, Neo4J, and Elastic with 1 faster ACID solution, with NetworkX and Pandas interfaces, and bindings for C 99, C++ 17, Python 3, Java, GoLang 🗄️

semi_index - Implementation of the JSON semi-index described in the paper "Semi-Indexing Semi-Structured Data in Tiny Space"

pydantic-core - Core validation logic for pydantic written in rust

is2 - embedded RESTy http(s) server library from Edgio

peps - Python Enhancement Proposals

reddit_mining

msgspec - A fast serialization and validation library, with builtin support for JSON, MessagePack, YAML, and TOML

json_benchmark - Python JSON benchmarking and "correctness".

koda-validate - Typesafe, Composable Validation

Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing