json-buffet vs json-toolkit

| | json-buffet | json-toolkit |
|---|---|---|
| Mentions | 2 | 5 |
| Stars | 0 | 67 |
| Growth | - | - |
| Activity | 3.0 | 4.6 |
| Last commit | about 1 year ago | about 1 year ago |
| Language | C++ | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
json-buffet
- Analyzing multi-gigabyte JSON files locally
And here's the code: https://github.com/multiversal-ventures/json-buffet
The API isn't the best. I'd have preferred an iterator-based solution as opposed to this callback-based one, but we worked with what rapidjson gave us for the proof of concept.
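To make that distinction concrete, here is a minimal sketch of the two call patterns. It is plain Python rather than json-buffet's actual rapidjson-based C++ API, and all names and data are illustrative:

```python
import json

def parse_with_callbacks(text, on_pair):
    """Callback (push) style: the parser owns the loop and calls you back."""
    for key, value in json.loads(text).items():
        on_pair(key, value)

def iter_pairs(text):
    """Iterator (pull) style: the caller owns the loop and pulls values."""
    for key, value in json.loads(text).items():
        yield key, value

doc = '{"AAPL": 191.2, "MSFT": 414.5}'  # made-up sample document

# Push: control stays inside the parser.
collected = {}
parse_with_callbacks(doc, lambda k, v: collected.update({k: v}))

# Pull: control stays with the caller.
for key, value in iter_pairs(doc):
    if key == "MSFT":
        break  # stopping early is trivial here; with callbacks you need a flag or an exception
```

The pull style is usually nicer to consume for exactly that early-exit reason, but rapidjson's SAX interface is push-based, which is why the prototype ended up with callbacks.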
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Ha! Thanks to you, today I found out how big those uncompressed JSON files really are (the data wasn't accessible to me, so I shared the tool with my colleague and he was the one who ran the queries on his laptop): https://www.dolthub.com/blog/2022-09-02-a-trillion-prices/
And yep, it was more or less the way you did it with ijson. I found ijson just a day after I finished the prototype. Rapidjson would probably be faster, especially after enabling SIMD, but the indexing was a one-time thing.
We have open-sourced the codebase. Here's the link: https://github.com/multiversal-ventures/json-buffet. Since this was a quick-and-dirty prototype, comments were sparse. I have updated the README and added a sample json-fetcher. Hope this is more useful for you.
Another unwritten TODO was to nudge the data providers towards more streaming-friendly compression formats, and then just create an index to fetch the data directly from their compressed archives. That would have saved everyone a LOT of $$$.
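For reference, a minimal sketch of the ijson-style streaming pass mentioned above. The file name, prefix, and field names are hypothetical, since the real price data layout isn't shown in the thread:

```python
import ijson

# Stream a multi-gigabyte JSON file without loading it all into memory.
# "prices.item" means: each element of a top-level "prices" array
# (hypothetical layout); adjust the prefix to the actual document structure.
# ijson.parse(f) yields raw (prefix, event, value) tuples if you need
# the lower-level event stream instead.
count = 0
total = 0.0
with open("prices.json", "rb") as f:
    for record in ijson.items(f, "prices.item"):
        count += 1
        total += float(record["price"])  # hypothetical field name

if count:
    print(count, "records, average price", total / count)
```

As described in the thread, json-buffet's trick is to make the expensive indexing a one-time pass so later lookups don't have to re-scan the whole file; the snippet above only shows the plain streaming scan.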
json-toolkit
- Show HN: Comma Separated Values (CSV) to Unicode Separated Values (USV)
CSV is great because Excel can import it, but Excel can't import USV; so at that point, why use USV when you can use JSON?
https://github.com/tyleradams/json-toolkit/
- Analyzing multi-gigabyte JSON files locally
> Also note that this approach generalizes to other text-based formats. If you have 10 gigabytes of CSV, you can use Miller for processing. For binary formats, you could use fq if you can find a workable record separator.
You can also generalize it without learning a new mini-language by using https://github.com/tyleradams/json-toolkit, which converts CSV/binary/whatever to/from JSON.
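As a rough illustration of that conversion step, here is a plain Python standard-library version (not json-toolkit's own commands, which aren't shown here) that turns CSV into newline-delimited JSON for jq to consume:

```python
import csv
import json
import sys

# Read CSV on stdin and emit one JSON object per row (NDJSON) on stdout,
# ready to pipe into jq or any other JSON tool.
for row in csv.DictReader(sys.stdin):
    sys.stdout.write(json.dumps(row) + "\n")
```

Hypothetical usage (script and column names made up): `python csv_to_ndjson.py < data.csv | jq '.price'`.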
- Fq: Jq for Binary Formats
- Show HN: Angle Grinder – A terminal app to slice, dice, and aggregate your logs
I really like this tool, but I'm not sure what it gets me beyond jq (and https://github.com/tyleradams/json-toolkit to convert non-JSON to JSON).
What can angle grinder do better than jq?
- Show HN: Transform a CSV into a JSON and vice versa
What are some alternatives?
japronto - Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.
miller - Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON
semi_index - Implementation of the JSON semi-index described in the paper "Semi-Indexing Semi-Structured Data in Tiny Space"
ndjson - Streaming line delimited json parser + serializer
is2 - embedded RESTy http(s) server library from Edgio
angle-grinder - Slice and dice logs on the command line
reddit_mining
csv2json - Simple tool for converting CSVs to JSON
json_benchmark - Python JSON benchmarking and "correctness".
jq - Command-line JSON processor [Moved to: https://github.com/jqlang/jq]
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
nq - Unix command line queue utility