json-buffet vs japronto

| | json-buffet | japronto |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 0 | 8,618 |
| Growth | - | - |
| Activity | 3.0 | 0.0 |
| Last commit | over 1 year ago | about 1 year ago |
| Language | C++ | C |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
json-buffet
-
Analyzing multi-gigabyte JSON files locally
And here's the code: https://github.com/multiversal-ventures/json-buffet
The API isn't the best. I'd have preferred an iterator-based solution as opposed to this callback-based one. But we worked with what rapidjson gave us for the proof of concept.
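To make the callback-versus-iterator point concrete, here is a minimal Python sketch of the two API shapes (hypothetical names only; json-buffet's real interface is C++ built on rapidjson's SAX callbacks):

```python
# Hypothetical illustration of the two API shapes discussed above,
# not json-buffet's actual interface.

def parse_with_callback(tokens, on_event):
    # Callback style: the parser owns the loop and pushes every event
    # into user code; the caller can't easily pause, skip, or stop early.
    for token in tokens:
        on_event(token)

def parse_as_iterator(tokens):
    # Iterator style: the caller owns the loop and pulls events,
    # which composes naturally with break, islice, zip, etc.
    for token in tokens:
        yield token

events = ["start_map", ("key", "price"), ("number", 42), "end_map"]

parse_with_callback(events, lambda e: print("callback got", e))

for e in parse_as_iterator(events):
    print("iterator got", e)
    if e == "end_map":  # the caller decides when to stop
        break
```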
-
Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Ha! Thanks to you, today I found out how big those uncompressed JSON files really are (the data wasn't accessible to me, so I shared the tool with my colleague and he was the one who ran the queries on his laptop): https://www.dolthub.com/blog/2022-09-02-a-trillion-prices/ .
And yep, it was more or less the way you did it with ijson. I found ijson just a day after I finished the prototype. Rapidjson would probably be faster, especially after enabling SIMD. But the indexing was a one-time thing.
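For readers unfamiliar with it, ijson is a streaming (pull-based) JSON parser, so a multi-gigabyte file can be scanned without loading it into memory. A minimal sketch, assuming a local file and a layout like {"prices": [ ... ]}, looks like this:

```python
# Stream a huge JSON file record by record instead of loading it whole.
# The file name, the "prices" array layout, and the "ticker" field are
# assumptions for the example.
import ijson

with open("prices.json", "rb") as f:
    # ijson.items() lazily yields each object under the given prefix.
    for record in ijson.items(f, "prices.item"):
        if record.get("ticker") == "AAPL":
            print(record)
```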
We have open sourced the codebase. Here's the link: https://github.com/multiversal-ventures/json-buffet . Since this was a quick and dirty prototype, comments were sparse. I have updated the Readme, and added a sample json-fetcher. Hope this is more useful for you.
Another unwritten TODO was to nudge the data providers towards more streaming-friendly compression formats - and then just create an index to fetch the data directly from their compressed archives. That would have saved everyone a LOT of $$$.
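The indexing idea boils down to a one-time pass that records where each record lives in the file, after which point queries can be answered with HTTP Range requests instead of re-downloading or re-parsing everything. A rough sketch of that pattern, assuming newline-delimited records for simplicity (json-buffet itself indexes into a single large JSON document via rapidjson):

```python
# One-time pass: map each record's key to its (offset, length) in the file.
# Later queries fetch only those bytes via an HTTP Range request.
# The NDJSON layout, the "id" field, and the URL are assumptions for illustration.
import json
import requests

def build_index(path):
    index, offset = {}, 0
    with open(path, "rb") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = (offset, len(line))
            offset += len(line)
    return index

def fetch_record(url, index, key):
    offset, length = index[key]
    # The Range header asks the server for just the slice we need.
    resp = requests.get(url, headers={"Range": f"bytes={offset}-{offset + length - 1}"})
    resp.raise_for_status()
    return json.loads(resp.content)

# index = build_index("prices.ndjson")
# print(fetch_record("https://example.com/prices.ndjson", index, "AAPL"))
```

Doing the same against a compressed archive would additionally require a seekable, block-compressed format, which is the "streaming friendly" nudge mentioned above.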
japronto
-
Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
100x faster than FastAPI seems easy. I wonder how it compares to other fast Python libraries like Japronto[1] and non-Python ones too.
1 - https://github.com/squeaky-pl/japronto
-
A Look on Python Web Performance at the end of 2022
The project's source code lives on GitHub. With more than 8.6k stars and 596 forks it is quite popular, but there have been no new releases since 2018 and it looks pretty much unmaintained: no PRs are accepted, no issues are closed, and there is still no Windows, macOS (Apple Silicon), or PyPy3 support. Japronto itself uses uvloop, which has more than 9k stars and 521 forks and, unlike japronto, seems to be well maintained.
- Screaming-fast, scalable, asynchronous Python 3.5 HTTP toolkit
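For context, japronto's programming model is a tiny router plus plain handler functions; the hello-world from the project's README looks roughly like this (note the project targets older, Python 3.5-era versions):

```python
# Minimal japronto app, essentially the README hello-world.
from japronto import Application

def hello(request):
    # Handlers build a response from the request object.
    return request.Response(text="Hello world!")

app = Application()
app.router.add_route("/", hello)

if __name__ == "__main__":
    app.run(debug=True)
```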
What are some alternatives?
is2 - embedded RESTy http(s) server library from Edgio
socketify.py - Bringing Http/Https and WebSockets High Performance servers for PyPy3 and Python3
semi_index - Implementation of the JSON semi-index described in the paper "Semi-Indexing Semi-Structured Data in Tiny Space"
oha - Ohayou(おはよう), HTTP load generator, inspired by rakyll/hey with tui animation.
json_benchmark - Python JSON benchmarking and "correctness".
yyjson - The fastest JSON library in C
reddit_mining
jq-zsh-plugin - jq zsh plugin
ucall - Web Serving and Remote Procedure Calls at 50x lower latency and 70x higher bandwidth than FastAPI, implementing JSON-RPC & REST over io_uring ☎️
Apache Arrow - Apache Arrow is the universal columnar format and multi-language toolbox for fast data interchange and in-memory analytics
vibora - Fast, asynchronous and elegant Python web framework.