reddit-html-archiver vs PushshiftDumps

| | reddit-html-archiver | PushshiftDumps |
|---|---|---|
| Mentions | 12 | 40 |
| Stars | 165 | 242 |
| Growth | - | - |
| Activity | 1.8 | 8.1 |
| Last Commit | almost 4 years ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
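The site does not publish the exact formula behind that number, but a recency-weighted commit score of the same general flavor can be sketched as follows; the exponential decay and the 90-day half-life are illustrative assumptions, not the real method:

```python
import math
import time

def activity_score(commit_timestamps, half_life_days=90.0):
    """Sum of per-commit weights that decay exponentially with age, so
    recent commits count for more than older ones (illustrative only,
    not the site's published formula)."""
    now = time.time()
    decay = math.log(2) / (half_life_days * 86400)
    return sum(math.exp(-decay * (now - ts)) for ts in commit_timestamps)
```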
reddit-html-archiver
- /r/planetside will be going private on June 12th, and will not be coming back until Reddit reverses course on API pricing
Other options, like https://github.com/libertysoft3/reddit-html-archiver, are not working anymore (I tried it to create a self-hosted /r/planetside backup).
- This Reddit Community Has Been Archived
Well done, now you should make it sane. No need to reinvent the wheel here. Just rewrite reddit-html-archiver to use the raw json from redarcs rather than the pushshift api.
- r/okbuddyretard will be "completely wiped from existence" according to one of the mods
I've seen several banned subs archived using https://github.com/libertysoft3/reddit-html-archiver
- What are Your favorite tools to backup reddit data? (Text Posts, Media Content, Comments..)
- Archiving as much of Soundgasm as possible
https://github.com/libertysoft3/reddit-html-archiver can accomplish step 1 out of the box. Parse every line for soundgasm and/or the other domains you are targeting, and maybe run a dedupe on the list before downloading to lighten the load on yt-dl, since it wasn't optimized for that the last time I checked that deep (which was YEEEEARS ago, fwiw).
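A minimal sketch of that parse-and-dedupe step, assuming the dump has already been decompressed to newline-delimited JSON; the file names and the soundgasm.net domain list are illustrative:

```python
import json
import re

# Illustrative file names; the input is one JSON object per line, as in the
# Pushshift-style dumps discussed in this thread.
DUMP_FILE = "planetside_submissions.ndjson"
URL_LIST = "soundgasm_urls.txt"

# Domains being targeted; extend as needed.
TARGET_DOMAINS = ("soundgasm.net",)

# Rough URL matcher; good enough for building a download list.
url_pattern = re.compile(r"https?://\S+")

seen = set()  # dedupe before handing the list to yt-dl
with open(DUMP_FILE, "r", encoding="utf-8") as infile, \
        open(URL_LIST, "w", encoding="utf-8") as outfile:
    for line in infile:
        obj = json.loads(line)
        # scan every field that can carry a link
        text = " ".join(str(obj.get(key, "")) for key in ("url", "selftext", "body"))
        for url in url_pattern.findall(text):
            if any(domain in url for domain in TARGET_DOMAINS) and url not in seen:
                seen.add(url)
                outfile.write(url + "\n")
```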
- I’m leaving Reddit. If there’s a mass movement to do something about what’s happening, let me know.
- /r/NoNewNormal has been banned by Reddit. A good reminder that Reddit is run by fascists, and that all the subreddits that petitioned for this are book-burners. Are you a developer? Help us program the alternative. See comments for details.
- Welcome my r/NoNewNormal brethren
- r/NoNewNormal has been banned!
- Is there a way I can archive the r/lounge subreddit?
You could try using https://github.com/libertysoft3/reddit-html-archiver, which is the software we use to power our reddit archiving efforts over at https://the-eye.eu/r/
PushshiftDumps
- Pushshift Dumps Help: Only getting submissions that are named comments
I am trying to get comments and submissions from specific subreddits. So far, I've run u/watchful1's script combine_folder_multiprocess.py and have been able to process a few files.
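If the extracted records look alike, one way to tell the two types apart is by their fields: in these dumps, submission objects carry a `title`, while comment objects carry a `parent_id` pointing at what they reply to. A minimal sketch, with an illustrative file name:

```python
import json

def record_type(obj: dict) -> str:
    """Classify a dump record as a submission or a comment."""
    if "title" in obj:        # submissions have a title
        return "submission"
    if "parent_id" in obj:    # comments point at their parent
        return "comment"
    return "unknown"

# Illustrative name for an already-decompressed dump file.
with open("dump.ndjson", "r", encoding="utf-8") as infile:
    for line in infile:
        obj = json.loads(line)
        print(record_type(obj), obj.get("id"))
```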
- Create and Search In Your Own Reddit Database
FYI, you can use my filter_file.py script to directly extract out submissions with a certain title. There's a place you can put in a file with a list of keywords to filter on if you have a lot of them. Or it would be fairly easy to modify to use a regex. There are also steps listed to export the list of submission ids and then filter a comments file to only comments from those submissions. You can also export directly to CSV, though you would want to use zst files for any intermediate steps. Let me know if anything in there doesn't work.
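For anyone who wants to see the underlying idea rather than use filter_file.py directly, here is a minimal sketch of a keyword filter over a compressed dump; the keyword list and file name are illustrative, and the zstandard package is the same one the repo's scripts rely on:

```python
import io
import json
from zstandard import ZstdDecompressor

KEYWORDS = {"archive", "backup"}  # illustrative keyword list
matched_ids = []

# Illustrative file name; any subreddit submissions dump works the same way.
with open("submissions.zst", "rb") as fh:
    reader = ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        if not line.strip():
            continue
        obj = json.loads(line)
        title = obj.get("title", "").lower()
        if any(keyword in title for keyword in KEYWORDS):
            matched_ids.append(obj["id"])

print(f"matched {len(matched_ids)} submissions")
```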
- Reddit starting to bring back deleted comments.
This repo has good examples of scripts for working with them: https://github.com/Watchful1/PushshiftDumps
- Encountered a non-utf8 character

```python
import json
from zstandard import ZstdDecompressor


def read_redditfile(file: str):
    """
    Iterate over the pushshift JSON lines, yielding them as Python dicts.
    Decompress iteratively if necessary.
    """
    # older files in the dataset are uncompressed while newer ones use zstd
    # compression and have .xz, .bz2, or .zst endings
    if not file.endswith('.bz2') and not file.endswith('.xz') and not file.endswith('.zst'):
        with open(file, 'r', encoding='utf-8') as infile:
            for line in infile:
                yield json.loads(line)
    else:
        # code by Watchful1 written for the Pushshift offline dataset, found here:
        # https://github.com/Watchful1/PushshiftDumps
        with open(file, 'rb') as fh:
            dctx = ZstdDecompressor(max_window_size=2147483648)
            with dctx.stream_reader(fh) as reader:
                previous_line = ""
                while True:
                    chunk = reader.read(2**24)  # 16mb chunks
                    if not chunk:
                        break
                    string_data = chunk.decode('utf-8')
                    lines = string_data.split("\n")
                    for i, line in enumerate(lines[:-1]):
                        if i == 0:
                            line = previous_line + line
                        yield json.loads(line)
                    previous_line = lines[-1]
```
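The error in that thread's title has a likely cause in this code: a fixed-size chunk can end in the middle of a multibyte UTF-8 character, at which point `chunk.decode('utf-8')` raises. One way to sidestep the boundary problem, sketched here rather than taken from the repo, is to let io.TextIOWrapper manage the byte-to-text conversion:

```python
import io
import json
from zstandard import ZstdDecompressor

def read_zst_lines(path: str):
    """Yield dicts from a zstd-compressed NDJSON file without
    chunk-boundary decode errors."""
    with open(path, 'rb') as fh:
        dctx = ZstdDecompressor(max_window_size=2147483648)
        with dctx.stream_reader(fh) as reader:
            # TextIOWrapper buffers incomplete multibyte sequences
            # internally, so no read can split a character in half.
            for line in io.TextIOWrapper(reader, encoding='utf-8'):
                if line.strip():
                    yield json.loads(line)
```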
- What to do after decompressing the files from academic torrents?
Just look one folder down in the GitHub repo, https://github.com/Watchful1/PushshiftDumps/tree/master/scripts; the scripts are still there.
- What are you using to browse/self host downloaded reddit?
I am working with the ZST files downloaded from Pushshift and sorted into subreddits by the lovely u/watchful1 here. ZST is too compressed to browse on its own, but using scripts like this one you can process them into readable NDJSON files. From there I'm not sure what to do. I would like to have a self-hosted reddit clone that I can import these dumps into and browse freely.
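That ZST-to-NDJSON conversion step is small enough to sketch directly; the file names are illustrative, and copy_stream comes from the same zstandard package used throughout these scripts:

```python
from zstandard import ZstdDecompressor

# Illustrative file names for one subreddit's comment dump.
with open("planetside_comments.zst", "rb") as fin, \
        open("planetside_comments.ndjson", "wb") as fout:
    # copy_stream decompresses straight into the output file; the result is
    # plain newline-delimited JSON, readable in any text editor.
    ZstdDecompressor(max_window_size=2**31).copy_stream(fin, fout)
```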
- Tell HN: My Reddit account was banned after adding my subs to the protest
The whole of reddit (posts and comments separately) from 2005-06 until 2022-12 is on this torrent link [1], and it's very easy to download, extract, and use the data [2]. I'm writing my thesis about the connection between a reddit post's type and its comment structure, and I've been working with this data for a few months; it's amazing.
[1] https://academictorrents.com/details/7c0645c94321311bb05bd87...
[2] https://github.com/Watchful1/PushshiftDumps
- Reddit, API calls, and AI - Who does your knowledge belong to?
Sure! You can download the compressed data from this torrent, then you can use this project if you want to just decompress and process the data.
- Script to find overlapping users between subreddits from dump files
You can go through the process outlined in that thread to download the subreddits you're interested in, then add them at the top of the new script and run it; it will output the list of overlapping users. It will likely be faster than the old script, even counting the download time for the dumps, since the API was so slow. Though you are limited to the 20k subreddits that are available.
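At its core, that overlap is a set intersection over the author fields of the dumps; a minimal sketch over two illustrative dump files:

```python
import io
import json
from zstandard import ZstdDecompressor

def authors(path: str) -> set:
    """Collect the distinct authors appearing in one subreddit's dump."""
    result = set()
    with open(path, "rb") as fh:
        reader = ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            if not line.strip():
                continue
            author = json.loads(line).get("author")
            if author and author != "[deleted]":  # skip removed accounts
                result.add(author)
    return result

# Illustrative file names for the two subreddits being compared.
overlap = authors("planetside_comments.zst") & authors("okbuddyretard_comments.zst")
print(f"{len(overlap)} overlapping users")
```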
- This Reddit Community Has Been Archived
How do I read the file? First I tried to extract the file; OK, I got it, but then I have a text file I can't read. I saw a few people saying it was just a JSON file, so I tried a JSON reader, but it says the JSON data is invalid. Then I tried this program, but nothing happens; no new file is created or anything. Here's a screenshot. Maybe I'm doing something wrong, but I don't know, because the script doesn't have any instructions on how to use it!
What are some alternatives?
redscarepod-archive
Sketchpad
saidit - The reddit open source fork powering SaidIt
Pushshift-Importer
redditPostArchiver - Easily archive important Reddit post threads onto your computer
RedditLemmyImporter - 🔥 Anti-Reddit Aktion 🔥
eternity - bypass Reddit's 1000-item listing limits by externally storing your Reddit items (saved, created, upvoted, downvoted, hidden) in your own database
zreader - Read compressed NDJSON .zst files easily
ripme - Downloads albums in bulk
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
gwaripper - Tool for conveniently downloading audios from r/gonewildaudio and similar subreddits
reddit-project-public