| | json_benchmark | jsplit |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 26 | 61 |
| Growth | - | - |
| Activity | 3.7 | 10.0 |
| Last commit | over 1 year ago | about 2 years ago |
| Language | Python | Go |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
json_benchmark
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
If you're primarily targeting Python as an application layer, you may also want to check out my msgspec library[1]. All the perf benefits of e.g. yyjson, but with schema validation like pydantic. It regularly benchmarks[2] as the fastest JSON library for Python. Much of the overhead of decoding JSON -> Python comes from the python layer, and msgspec employs every trick I know to minimize that overhead.
[1]: https://github.com/jcrist/msgspec
[2]: https://github.com/TkTech/json_benchmark
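The schema-validation point may be clearer in code. Below is a stdlib-only sketch of what "decoding with validation" means; it is not msgspec's API (msgspec defines `msgspec.Struct` types and performs this kind of checking inside its C decoder, which is a large part of the speedup the comment describes). The `User` type and fields are purely illustrative:

```python
import json
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str
    admin: bool = False

def decode_user(raw: bytes) -> User:
    # Decode JSON, then validate the result against the schema.
    user = User(**json.loads(raw))
    expected = {"name": str, "email": str, "admin": bool}
    for field, typ in expected.items():
        if not isinstance(getattr(user, field), typ):
            raise TypeError(f"{field!r} must be {typ.__name__}")
    return user

user = decode_user(b'{"name": "alice", "email": "a@example.com"}')
print(user)
```

Doing the validation in pure Python like this adds a second pass over the data; fusing it into the decode loop is the trick libraries like msgspec use to avoid that overhead.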
- Sunday Daily Thread: What's everyone working on this week?
- Adding NVMe drive support to SMARTie (https://github.com/tktech/smartie), a pure-Python, cross-platform library for getting disk information like serial numbers and SMART attributes (such as disk temperature)
- json_benchmark (https://github.com/tktech/json_benchmark), a new benchmark and correctness test for the more modern Python JSON libraries
- py_yyjson (https://github.com/tktech/py_yyjson), still a WIP, which provides Python bindings to the yyjson library; yyjson offers comparable speed to simdjson but more flexibility when parsing (comments, arbitrary-sized numbers, Inf/NaN, etc.)
- Some fixes to https://github.com/TkTech/humanmark, a Markdown library used to edit the README.md in json_benchmark above
jsplit
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Regarding the hard way, this little utility does a great job of splitting larger-than-memory JSON documents into collections of NDJSON files:
https://github.com/dolthub/jsplit
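To illustrate the transformation jsplit performs, here is a minimal pure-Python sketch that streams a top-level JSON array and emits one NDJSON line per element, without holding the whole document in memory. jsplit itself (written in Go) also handles output-file rotation and size limits; none of the function names below come from jsplit:

```python
import io
import json

def iter_json_array(stream, chunk_size=65536):
    """Incrementally yield elements of a top-level JSON array."""
    decoder = json.JSONDecoder()
    buf = ""
    # Read until the opening '[' of the array appears.
    while "[" not in buf:
        chunk = stream.read(chunk_size)
        if not chunk:
            raise ValueError("no top-level JSON array found")
        buf += chunk
    buf = buf[buf.index("[") + 1:]
    while True:
        buf = buf.lstrip().lstrip(",").lstrip()
        if buf.startswith("]"):
            return
        try:
            obj, end = decoder.raw_decode(buf)
        except json.JSONDecodeError:
            # An element may be split across chunks; read more input.
            chunk = stream.read(chunk_size)
            if not chunk:
                raise
            buf += chunk
            continue
        yield obj
        buf = buf[end:]

# Usage: write each array element as one NDJSON line.
src = io.StringIO('[{"a": 1}, {"b": 2}, {"c": 3}]')
ndjson_lines = [json.dumps(obj) for obj in iter_json_array(src)]
print("\n".join(ndjson_lines))
```

Because only the current buffer is kept in memory, this approach scales to documents far larger than RAM, which is the same property that makes jsplit useful for the multi-gigabyte dumps discussed in the thread.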
- [OC] The ridiculously absurd amount of pricing data that insurance companies just publicly dumped
What are some alternatives?
japronto - Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.
data-analysis
JsonReader - A JSON pull parser for PHP
simdjson-go - Golang port of simdjson: parsing gigabytes of JSON per second
ustore - Multi-Modal Database replacing MongoDB, Neo4J, and Elastic with 1 faster ACID solution, with NetworkX and Pandas interfaces, and bindings for C 99, C++ 17, Python 3, Java, GoLang 🗄️
json-buffet
price-transparency-guide - The technical implementation guide for the tri-departmental price transparency rule.
is2 - embedded RESTy http(s) server library from Edgio
search-dw - search-dw is a Python utility to automate "search and download" via the command line. It might be useful if you need to download all the results of a Google search on a certain topic at once