| | json-buffet | msgspec |
|---|---|---|
| Mentions | 2 | 31 |
| Stars | 0 | 1,877 |
| Growth | - | - |
| Activity | 3.0 | 8.6 |
| Latest commit | about 1 year ago | about 1 month ago |
| Language | C++ | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
json-buffet
- Analyzing multi-gigabyte JSON files locally
And here's the code: https://github.com/multiversal-ventures/json-buffet
The API isn't the best. I'd have preferred an iterator-based solution to this callback-based one, but we worked with what rapidjson gave us for the proof of concept.
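The two API shapes the commenter contrasts can be illustrated with a toy scan (this is a hypothetical sketch, not json-buffet's actual API; rapidjson's SAX interface pushes library authors toward the callback style):

```python
# Callback style: the parser drives, pushing each hit to the caller.
def scan(records, on_match):
    for rec in records:
        if rec.get("symbol") == "AAPL":
            on_match(rec)

# Iterator style: the same traversal inverted into a generator,
# so the caller pulls values one at a time instead of receiving pushes.
def iter_matches(records):
    for rec in records:
        if rec.get("symbol") == "AAPL":
            yield rec

records = [{"symbol": "AAPL", "price": 1}, {"symbol": "MSFT", "price": 2}]
hits = []
scan(records, hits.append)
assert hits == list(iter_matches(records)) == [{"symbol": "AAPL", "price": 1}]
```

The iterator form composes better with ordinary Python (`for`, `itertools`, early `break`), which is why the commenter would have preferred it.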
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Ha! Thanks to you, today I found out how big those uncompressed JSON files really are (the data wasn't accessible to me, so I shared the tool with my colleague and he was the one who ran the queries on his laptop): https://www.dolthub.com/blog/2022-09-02-a-trillion-prices/
And yep, it was more or less the way you did it with ijson. I found ijson just a day after I finished the prototype. rapidjson would probably be faster, especially after enabling SIMD, but the indexing was a one-time thing.
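The streaming idea behind both ijson and the prototype can be sketched with the stdlib alone (illustrative only; ijson and json-buffet do this incrementally and far more efficiently): pull one JSON document at a time out of a concatenated stream instead of materializing everything at once.

```python
import json

def iter_json_stream(text):
    """Yield successive JSON values from a string of concatenated
    JSON documents, without loading them all into memory as one blob."""
    dec = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # Skip whitespace between documents.
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx == len(text):
            break
        obj, idx = dec.raw_decode(text, idx)
        yield obj

stream = '{"price": 1}\n{"price": 2}\n{"price": 3}\n'
prices = [rec["price"] for rec in iter_json_stream(stream)]
assert prices == [1, 2, 3]
```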
We have open sourced the codebase. Here's the link: https://github.com/multiversal-ventures/json-buffet . Since this was a quick and dirty prototype, comments were sparse. I have updated the Readme, and added a sample json-fetcher. Hope this is more useful for you.
Another unwritten TODO was to nudge the data providers towards more streaming-friendly compression formats - and then just create an index to fetch the data directly from their compressed archives. That would have saved everyone a LOT of $$$.
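The indexing idea can be sketched in a few lines (a toy sketch with hypothetical names, not json-buffet's actual code): record each object's byte offset and length once, then fetch individual records by seeking, rather than re-parsing the whole file. json-buffet applies the same principle so that ranged requests can pull slices of a remote archive.

```python
import io
import json

def build_index(raw: bytes):
    """Map each record's id to its (offset, length) in the buffer,
    so later lookups can seek instead of re-reading everything."""
    index, offset = {}, 0
    for line in raw.splitlines(keepends=True):
        rec = json.loads(line)
        index[rec["id"]] = (offset, len(line))
        offset += len(line)
    return index

def fetch(f, index, key):
    """Read exactly one record using the precomputed index."""
    offset, length = index[key]
    f.seek(offset)
    return json.loads(f.read(length))

raw = b'{"id": "a", "v": 1}\n{"id": "b", "v": 2}\n'
f = io.BytesIO(raw)
index = build_index(raw)
assert fetch(f, index, "b") == {"id": "b", "v": 2}
```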
msgspec
- Htmx, Rust and Shuttle: A New Rapid Prototyping Stack
- Litestar 2.0
Full support for validation and serialisation of attrs classes and msgspec Structs. Where previously only Pydantic models and types were supported, you can now mix and match any of these three libraries. In addition, adding support for another modelling library has been greatly simplified with the new plugin architecture.
- FastAPI 0.100.0: Release Notes
> Maybe it was very slow before
That is at least partly the case. I maintain msgspec[1], another Python JSON validation library. Pydantic V1 was ~100x slower at encoding/decoding/validating JSON than msgspec, which was more a testament to Pydantic's performance issues than msgspec's speed. Pydantic V2 is definitely faster than V1, but it's still ~10x slower than msgspec, and up to 2x slower than other pure-python implementations like mashumaro.
Recent benchmark here: https://gist.github.com/jcrist/d62f450594164d284fbea957fd48b...
[1]: https://github.com/jcrist/msgspec
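What "JSON validation" means here can be illustrated with a stdlib-only toy (msgspec and pydantic are vastly more capable and faster; this just shows the core idea of checking parsed data against declared field types):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class User:
    name: str
    age: int

def decode_into(cls, raw: str):
    """Toy decode-and-validate: parse JSON, then check each field
    against the dataclass's annotated type before constructing it."""
    data = json.loads(raw)
    kwargs = {}
    for f in fields(cls):
        value = data[f.name]
        if not isinstance(value, f.type):
            raise TypeError(f"{f.name}: expected {f.type.__name__}, "
                            f"got {type(value).__name__}")
        kwargs[f.name] = value
    return cls(**kwargs)

user = decode_into(User, '{"name": "ada", "age": 36}')
assert user == User("ada", 36)
```

Doing this per field in pure Python on every request is exactly the overhead that libraries like msgspec move into compiled code.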
- Pydantic 2.0
While it's definitely much faster than pydantic V1 (which is a huge accomplishment!), it's still not exactly what I'd call "fast".
I maintain msgspec (https://github.com/jcrist/msgspec), a serialization/validation library which provides similar functionality to pydantic. Recent benchmarks of pydantic V2 against msgspec show msgspec is still 15-30x faster at JSON encoding, and 6-15x faster at JSON decoding/validating.
Benchmark (and conversation with Samuel) here: https://gist.github.com/jcrist/d62f450594164d284fbea957fd48b...
This is not to diminish the work of the pydantic team! For many users pydantic will be more than fast enough, and is definitely a more feature-filled tool. It's a good library, and people will be happy using it! But pydantic is not the only tool in this space, and rubbing some rust on it doesn't necessarily make it "fast".
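For readers who want to sanity-check such claims themselves, the general shape of this kind of microbenchmark looks something like the following (stdlib `json` only; the linked gist compares msgspec, pydantic and others on similar encode/decode round-trips, presumably along these lines):

```python
import json
import timeit

# Illustrative benchmark harness shape, not the gist's actual code.
payload = [{"id": i, "name": f"user{i}", "active": i % 2 == 0}
           for i in range(100)]
encoded = json.dumps(payload)

encode_s = timeit.timeit(lambda: json.dumps(payload), number=20)
decode_s = timeit.timeit(lambda: json.loads(encoded), number=20)

# Sanity check: the round-trip is lossless.
assert json.loads(encoded) == payload
print(f"encode: {encode_s:.4f}s  decode: {decode_s:.4f}s  (20 iterations)")
```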
- Need help developing a high performance Redis ORM for Python
I am using msgspec (https://github.com/jcrist/msgspec) instead of Pydantic.
- Blog post: Writing Python like it’s Rust
Another thing: why pyserde rather than stuff like msgspec? https://github.com/jcrist/msgspec
- Show HN: Msgspec, a fast serialization/validation library for Python
- [Guide] A Tour Through the Python Framework Galaxy: Discovering the Stars
Try msgspec | Maat | turbo for fast serialization and validation
- Pydantic V2 rewritten in Rust is 5-50x faster than Pydantic V1
Congratulations to the team, Pydantic is an amazing library.
If you find JSON serialization/deserialization a bottleneck, another interesting library (with much less features) for Python is msgspec: https://github.com/jcrist/msgspec
- Starlite updates March '22 | 2.0 is coming
This feature is yet to be released, but it will allow you to seamlessly use data modelled with, for example, Pydantic, SQLAlchemy, msgspec or dataclasses in your route handlers, without the need for an intermediary model; the conversion will be handled by the specific DTO "backend" implementation. This new paradigm also makes it trivial to add support for any such modelling library by simply implementing an appropriate backend.
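The "backend per modelling library" idea described above can be sketched as follows (a hypothetical sketch; class and function names are illustrative and not Starlite/Litestar's actual API):

```python
from dataclasses import asdict, dataclass, is_dataclass

class DataclassBackend:
    """Toy DTO backend: knows how to recognise and convert one
    modelling library's objects (here, stdlib dataclasses)."""

    @staticmethod
    def supports(obj) -> bool:
        return is_dataclass(obj)

    @staticmethod
    def to_dict(obj) -> dict:
        return asdict(obj)

# A msgspec or Pydantic backend would simply be appended here.
BACKENDS = [DataclassBackend]

def serialize(obj):
    """Dispatch to the first backend that supports the object."""
    for backend in BACKENDS:
        if backend.supports(obj):
            return backend.to_dict(obj)
    raise TypeError(f"no DTO backend for {type(obj).__name__}")

@dataclass
class User:
    name: str

assert serialize(User("ada")) == {"name": "ada"}
```

Adding support for a new modelling library then only requires implementing `supports` and `to_dict` for it, which is the extensibility claim the announcement makes.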
What are some alternatives?
japronto - Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.
pydantic - Data validation using Python type hints
semi_index - Implementation of the JSON semi-index described in the paper "Semi-Indexing Semi-Structured Data in Tiny Space"
orjson - Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy
is2 - embedded RESTy http(s) server library from Edgio
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
reddit_mining
mashumaro - Fast and well tested serialization library
json_benchmark - Python JSON benchmarking and "correctness".
MessagePack - MessagePack serializer implementation for Java / msgpack.org[Java]
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
marshmallow - A lightweight library for converting complex objects to and from simple Python datatypes.