| | reddit_mining | json-buffet |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 11 | 0 |
| Growth | - | - |
| Activity | 2.6 | 3.0 |
| Last Commit | 10 months ago | about 1 year ago |
| Language | HTML | C++ |
| License | Creative Commons Zero v1.0 Universal | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
reddit_mining
- Analyzing multi-gigabyte JSON files locally
zstd decompression should almost always be very fast. It decompresses faster than DEFLATE in every benchmark I've seen, and it's in the same league as LZ4.
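For local experiments, those .zst dumps can be streamed without ever decompressing them to disk. Here's a minimal sketch using Python's `zstandard` package; the file name is hypothetical, and note that the Pushshift dumps are written with a long window, so `max_window_size` must be raised above the library default:

```python
import io
import zstandard as zstd  # pip install zstandard

def iter_lines(path):
    """Yield decoded lines from a zstd-compressed NDJSON dump, streaming."""
    # Pushshift dumps use a long window; the library's default limit is too small.
    dctx = zstd.ZstdDecompressor(max_window_size=2**31)
    with open(path, "rb") as fh, dctx.stream_reader(fh) as reader:
        yield from io.TextIOWrapper(reader, encoding="utf-8")

for line in iter_lines("RS_2022-09.zst"):  # hypothetical dump file name
    ...  # json.loads(line) and process one submission at a time
```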
You might be interested in converting the Pushshift data to Parquet. Using OctoSQL I'm able to query the submissions data (from the beginning of Reddit to Sept 2022) in about 10 minutes:
https://github.com/chapmanjacobd/reddit_mining#how-was-this-...
Although if you're sending the data to Postgres or BigQuery you can probably get better query performance via indexes or parallelism.
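A rough sketch of that conversion step (not the repo's actual script, which is behind the truncated link above): assuming the submissions have already been decompressed to newline-delimited JSON, pandas can read the dump in chunks and pyarrow can append each chunk to a single Parquet file. The file names and column subset are illustrative:

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

writer = None
# Read the NDJSON in chunks so the whole dump never sits in memory.
for chunk in pd.read_json("submissions.ndjson", lines=True, chunksize=100_000):
    # Real dumps have messy schemas; an explicit dtype map may be needed
    # so that every chunk produces the same Arrow schema.
    table = pa.Table.from_pandas(
        chunk[["id", "subreddit", "created_utc"]], preserve_index=False
    )
    if writer is None:
        writer = pq.ParquetWriter("submissions.parquet", table.schema)
    writer.write_table(table)
if writer is not None:
    writer.close()
```

From there, OctoSQL can query the file directly, along the lines of `octosql "SELECT subreddit, COUNT(*) FROM ./submissions.parquet GROUP BY subreddit"`.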
- reddit_mining - List of all Subreddits
- Show HN: List of All Subreddits
- Top 50k Subreddits
json-buffet
- Analyzing multi-gigabyte JSON files locally
And here's the code: https://github.com/multiversal-ventures/json-buffet
The API isn't the best. I'd have preferred an iterator-based solution as opposed to this callback-based one, but we worked with what RapidJSON gave us for the proof of concept.
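To make the distinction concrete, here is a small Python sketch of the two API shapes (json-buffet itself is C++, and `parse_with_callback` is a hypothetical stand-in, not its real interface):

```python
import ijson  # pip install ijson; a real pull-style parser, mentioned below

# Callback (push) style, as with RapidJSON's SAX interface: the parser
# drives your code by invoking a handler for every event it encounters.
def on_value(path, value):
    print(path, value)

# parse_with_callback(stream, on_value)  # hypothetical push-style entry point

# Iterator (pull) style, the shape the comment would have preferred:
# the consumer drives the parser and can stop whenever it has enough.
with open("big.json", "rb") as f:
    for prefix, event, value in ijson.parse(f):
        if event == "number":
            break  # stop early without reading the rest of the file
```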
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Ha! Thanks to you, today I found out how big those uncompressed JSON files really are (the data wasn't accessible to me, so I shared the tool with my colleague and he was the one who ran the queries on his laptop): https://www.dolthub.com/blog/2022-09-02-a-trillion-prices/
And yep, it was more or less the way you did it with ijson. I found ijson just a day after I finished the prototype. RapidJSON would probably be faster, especially after enabling SIMD. But the indexing was a one-time thing.
We have open-sourced the codebase. Here's the link: https://github.com/multiversal-ventures/json-buffet . Since this was a quick and dirty prototype, comments were sparse. I have updated the README and added a sample json-fetcher. Hope this is more useful for you.
Another unwritten TODO was to nudge the data providers towards a more streaming-friendly compression format, and then just create an index to fetch the data directly from their compressed archives. That would have saved everyone a LOT of $$$.
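A minimal sketch of that indexing idea, shown here over plain newline-delimited JSON; with a seekable compression format the same byte-range lookup could be served straight from the compressed archive (for example via HTTP Range requests). The key field and file names are illustrative, not json-buffet's actual format:

```python
import json

def build_index(path):
    """Map each record's id to its (offset, length) byte range. Built once."""
    index, offset = {}, 0
    with open(path, "rb") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = (offset, len(line))
            offset += len(line)
    return index

def fetch(path, index, key):
    """Read a single record by seeking straight to its byte range."""
    start, length = index[key]
    with open(path, "rb") as f:
        f.seek(start)
        return json.loads(f.read(length))
```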
What are some alternatives?
json-streamer - A fast streaming JSON parser for Python that generates SAX-like events using yajl
japronto - Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.
semi_index - Implementation of the JSON semi-index described in the paper "Semi-Indexing Semi-Structured Data in Tiny Space"
jq-zsh-plugin - jq zsh plugin
is2 - embedded RESTy http(s) server library from Edgio
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
json_benchmark - Python JSON benchmarking and "correctness".
zsv - zsv+lib: tabular data swiss-army knife CLI + world's fastest (simd) CSV parser
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
lnav - Log file navigator
ClickHouse - ClickHouse® is a free analytics DBMS for big data