PushshiftDumps
| | PushshiftDumps | Pushshift-Importer |
|---|---|---|
| Mentions | 40 | 7 |
| Stars | 240 | 14 |
| Growth | - | - |
| Activity | 8.1 | 2.0 |
| Latest commit | 8 days ago | about 1 year ago |
| Language | Python | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PushshiftDumps
-
Pushshift Dumps Help: Only getting submissions that are named comments
I am trying to get comments and submissions from specific subreddits. So far, I've run u/watchful1's script combine_folder_multiprocess.py and have been able to process a few files.
-
Create and Search In Your Own Reddit Database
FYI, you can use my filter_file.py script to directly extract submissions with a certain title. There's a place where you can point it at a file with a list of keywords to filter on, if you have a lot of them, or it would be fairly easy to modify it to use a regex. There are also steps listed to export the list of submission IDs and then filter a comments file down to only the comments from those submissions. You can also export directly to CSV, though you'd want to use zst files for any intermediate steps. Let me know if anything in there doesn't work.
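For the keyword idea specifically, here is a minimal sketch of the same approach. This is not filter_file.py itself; the keyword list, file name, and the `title` field check are illustrative:

```python
import io
import json
import zstandard

# Placeholder keyword set; filter_file.py reads its keywords from a file instead.
KEYWORDS = {"keyword one", "keyword two"}

def matching_submissions(path):
    # Stream the compressed dump line by line. The large max_window_size is
    # required for the pushshift archives; TextIOWrapper handles UTF-8
    # characters that span read boundaries.
    dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
    with open(path, "rb") as fh:
        for line in io.TextIOWrapper(dctx.stream_reader(fh), encoding="utf-8"):
            obj = json.loads(line)
            title = obj.get("title", "").lower()
            if any(keyword in title for keyword in KEYWORDS):
                yield obj
```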
-
Reddit starting to bring back deleted comments.
This repo has good examples of scripts for working with them: https://github.com/Watchful1/PushshiftDumps
-
Encountered a non-utf8 character
```python
def read_redditfile(file: str) -> dict:
    """
    Iterate over the pushshift JSON lines, yielding them as Python dicts.
    Decompress iteratively if necessary.
    """
    # older files in the dataset are uncompressed while newer ones use zstd
    # compression and have .xz, .bz2, or .zst endings
    if not file.endswith('.bz2') and not file.endswith('.xz') and not file.endswith('.zst'):
        with open(file, 'r', encoding='utf-8') as infile:
            for line in infile:
                l = json.loads(line)
                yield l
    else:
        # code by Watchful1 written for the Pushshift offline dataset, found here:
        # https://github.com/Watchful1/PushshiftDumps
        with open(file, 'rb') as fh:
            dctx = ZstdDecompressor(max_window_size=2147483648)
            with dctx.stream_reader(fh) as reader:
                previous_line = ""
                while True:
                    chunk = reader.read(2**24)  # 16mb chunks
                    if not chunk:
                        break
                    string_data = chunk.decode('utf-8')
                    lines = string_data.split("\n")
                    for i, line in enumerate(lines[:-1]):
                        if i == 0:
                            line = previous_line + line
                        comment = json.loads(line)
                        yield comment
                    previous_line = lines[-1]
```
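The error in the title comes from `chunk.decode('utf-8')`: a 16 MB chunk can end partway through a multi-byte UTF-8 character, and decoding the truncated bytes raises a UnicodeDecodeError. A minimal sketch of one fix (not the repo's exact code, which retries the decode with a larger chunk): buffer the stream as bytes and split on the newline byte, so only complete lines are ever decoded:

```python
import json
from zstandard import ZstdDecompressor

def read_zst_lines(path):
    # Split on newline *bytes* before decoding, so a multi-byte UTF-8
    # character that straddles a chunk boundary is never cut in half.
    with open(path, 'rb') as fh:
        dctx = ZstdDecompressor(max_window_size=2**31)
        with dctx.stream_reader(fh) as reader:
            buffer = b""
            while True:
                chunk = reader.read(2**24)  # 16mb of decompressed data
                if not chunk:
                    break
                buffer += chunk
                *lines, buffer = buffer.split(b"\n")
                for line in lines:
                    if line:
                        yield json.loads(line)  # json.loads accepts UTF-8 bytes
            if buffer.strip():
                yield json.loads(buffer)  # final line if the file lacks a trailing newline
```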
-
What to do after decompressing the files from academic torrents?
Just look one folder down in the GitHub repo (https://github.com/Watchful1/PushshiftDumps/tree/master/scripts); the scripts are still there.
-
What are you using to browse/self host downloaded reddit?
I am working with the ZST files downloaded from Pushshift and sorted into subreddits by the lovely u/watchful1 here. ZST is too compressed to browse on its own, but using scripts like this one you can process the files into readable NDJSON. From there I'm not sure what to do. I would like a self-hosted Reddit clone that I can import these dumps into and browse freely.
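If all you need is the ZST-to-NDJSON step, a minimal sketch using python-zstandard's stream copy (the file names are placeholders):

```python
import zstandard

def zst_to_ndjson(src, dst):
    # Decompress the dump straight to a plain newline-delimited JSON file.
    dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        dctx.copy_stream(fin, fout)

zst_to_ndjson("subreddit_comments.zst", "subreddit_comments.ndjson")
```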
-
Tell HN: My Reddit account was banned after adding my subs to the protest
All of Reddit (posts and comments separately) from 2005-06 until 2022-12 is in this torrent [1]; it's very easy to download, extract, and use the data [2]. I'm writing my thesis on the connection between a Reddit post's type and its comment structure, and I've been working with this data for a few months; it's amazing.
[1] https://academictorrents.com/details/7c0645c94321311bb05bd87...
[2] https://github.com/Watchful1/PushshiftDumps
-
Reddit, API calls, and AI - Who does your knowledge belong to?
Sure! You can download the compressed data from this torrent, then you can use this project if you want to just decompress and process the data.
-
Script to find overlapping users between subreddits from dump files
You can go through the process outlined in that thread to download the subreddits you're interested in, then add them at the top of the new script and run it; it will output the list of overlapping users. It will likely be faster than the old script, even counting the download time for the dumps, since the API was so slow. Though you are limited to the 20k available subreddits.
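The overlap itself is just a set intersection over authors. A sketch of the idea (this is not the linked script; the file names are placeholders):

```python
import io
import json
import zstandard

def authors(path):
    # Collect the distinct commenters in one per-subreddit dump.
    dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
    found = set()
    with open(path, "rb") as fh:
        for line in io.TextIOWrapper(dctx.stream_reader(fh), encoding="utf-8"):
            found.add(json.loads(line).get("author"))
    return found - {"[deleted]", "AutoModerator"}

# File names are placeholders for dumps downloaded as described above.
overlap = authors("subreddit_a_comments.zst") & authors("subreddit_b_comments.zst")
print(f"{len(overlap)} overlapping users")
```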
-
This Reddit Community Has Been Archived
How do I read the file? First I tried to extract it, which worked, but the resulting text file is unreadable. I saw a few people saying it was just a JSON file, so I tried a JSON reader, but it says the JSON data is invalid. Then I tried this program, but nothing happens and no new file is created (here's a screenshot). Maybe I'm doing something wrong, but I can't tell, because the script doesn't come with any instructions on how to use it!
Pushshift-Importer
-
What are you using to browse/self host downloaded reddit?
I'm thinking I will have to use a project like redarc or BDFR-to-HTML, or much more likely Pushshift-Importer, which lets you import Pushshift downloads into a SQLite database. From there I would have to hook the database up to a Reddit-like frontend.
-
[META] Hey mods, how about an AutoMod config to remove posts asking, "Am I too old?"
Just download the dumps from pushshift and then use Pushshift-Importer.
-
Rust template for parsing ZST files
I wrote my own Rust-based importer. Feel free to use types and such from that as well.
-
How do I correctly stream data from the dump files when they are in the weird JSON format and convert them to a CSV?
I built a command line tool to import the dumps into SQLite if you want to give it a go. https://github.com/Paul-E/Pushshift-Importer
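For the CSV half of the question, a minimal sketch (this is not the linked tool; the field list is an example of common submission keys, and the file names are placeholders):

```python
import csv
import json

FIELDS = ["id", "author", "created_utc", "title", "score"]

with open("submissions.ndjson", encoding="utf-8") as fin, \
        open("submissions.csv", "w", newline="", encoding="utf-8") as fout:
    writer = csv.DictWriter(fout, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    for line in fin:
        writer.writerow(json.loads(line))  # keys outside FIELDS are dropped
```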
-
Data dumps
I wrote some code to do just this. Give it the locations of the comments and submissions files and it will produce an output SQLite file.
-
What are you using to analyze the pushshift dumps?
I created a Pushshift importer for comments. You can find it here. It will import the comments into a SQLite database. It is written in Rust and is very fast compared to Python; it can import everything overnight if you have an SSD.
-
Performance of a 2TB comments database
If you stick with SQLite, you could try creating your own sequencer: funnel all your writes into one thread in one process, and have that thread do the writing. That way there is only ever one possible writer on the DB at a time. Here is an example of what I did when I built a tool to import comments from Pushshift into SQLite. When I run this on an NVMe drive, I am CPU-bound on decompression and JSON parsing, so the DB isn't even the bottleneck.
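A minimal sketch of that single-writer pattern in Python (the linked tool is Rust; the table schema and sentinel are illustrative):

```python
import queue
import sqlite3
import threading

rows = queue.Queue(maxsize=10_000)
DONE = object()  # sentinel telling the writer thread to stop

def writer(db_path):
    # The only thread that ever touches the database.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS comments (id TEXT PRIMARY KEY, author TEXT, body TEXT)")
    while True:
        item = rows.get()
        if item is DONE:
            break
        con.execute("INSERT OR IGNORE INTO comments VALUES (?, ?, ?)", item)
    con.commit()
    con.close()

t = threading.Thread(target=writer, args=("comments.db",))
t.start()
# Parser threads/processes just call rows.put((comment_id, author, body)).
rows.put(DONE)
t.join()
```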
What are some alternatives?
Sketchpad
redarc - Reddit archiver
RedditLemmyImporter - 🔥 Anti-Reddit Action 🔥
cloud-to-butt - Chrome extension that replaces occurrences of 'the cloud' with 'my butt'
zreader - Read compressed NDJSON .zst files easily
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
reddit-project-public
Lemmy - 🐀 A link aggregator and forum for the fediverse
Nuitka - Nuitka is a Python compiler written in Python. It's fully compatible with Python 2.6, 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, and 3.11. You feed it your Python app, it does a lot of clever things, and spits out an executable or extension module.
RedditScrape - Quick and dirty script to suck down the pr0n from Reddit before it's too late
redarcs-reader