Reddit-archivar Alternatives
Similar projects and alternatives to reddit-archivar
reddit-archivar reviews and mentions
-
Ask HN: Is anyone working on a Reddit archive?
I was focusing mostly on cybersecurity-related subreddits because the vulnerability and exploit discussions were of great value to me.
I built a little scraper in Go that stores the JSON data (instead of the HTML that the Archive Warrior stores) to save disk space.
I managed to discover and scrape around 80 GB of reduced JSON data, but I have no idea what to do with it now. I want to build myself a little minimalistic web interface so I can do a text/keyword search.
The problem with Reddit's API is that every listing endpoint only returns 1,000 entries across 10 pages. That means hot/top/new and search results are all capped: if more links match a keyword, you won't discover them.
So you need a very specific keyword list to be able to discover more posts.
[1] https://github.com/cookiengineer/reddit-archivar
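The pagination behind that cap works via an `after` cursor: each page of a public JSON listing returns up to 100 items plus the fullname of the last one, which you pass to the next request until the cursor runs dry at roughly 1,000 items. A minimal sketch of the URL construction (function names are my own, not from the repo):

```go
package main

import (
	"fmt"
	"net/url"
)

// listingURL builds one page of a Reddit JSON listing (hot/new/top).
// "after" is the fullname cursor returned by the previous page; pass
// an empty string for the first page. No matter how far you page,
// Reddit stops handing out cursors at roughly 1,000 items.
func listingURL(subreddit, listing, after string) string {
	q := url.Values{}
	q.Set("limit", "100") // maximum page size
	if after != "" {
		q.Set("after", after)
	}
	return fmt.Sprintf("https://www.reddit.com/r/%s/%s.json?%s",
		subreddit, listing, q.Encode())
}

func main() {
	// First page, then a follow-up page using a cursor from the
	// previous response ("t3_13xyzab" is a made-up fullname).
	fmt.Println(listingURL("netsec", "new", ""))
	fmt.Println(listingURL("netsec", "new", "t3_13xyzab"))
}
```

A scraper would loop: fetch, record the returned `after`, and stop when the response no longer contains one.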
-
Show HN: Reddit Archiving Tool
Inspired by the ongoing call-to-action by the Internet Archive team over at /r/DataHoarder [1], I've decided I want to try to preserve all cybersecurity related subreddits. [2]
For people who don't know what's going on: there's a likelihood that the attempt to monetize the Reddit API will lead to a lot of moderators quitting the platform, and many subreddits could be set to private and/or have their threads deleted. At least, that's the fear behind the ongoing moderator strike.
In my case I learned a LOT from Reddit's discussions about malware, exploits and how they work, and without those I certainly wouldn't be where I am today ... so I'm trying to preserve them.
As the Archive Warrior only scrapes the HTML directly into the Web Archive, I'm trying to preserve the data itself as JSON files, with the intent to store it later on IPFS (inspired a couple of days ago by the-eye-team's effort to archive RARBG on IPFS).
I just wanted to let people here know about the tool; if you want to archive your favorite subreddits, feel free to modify it.
There are some limitations, though: listings (new/hot/top/search) are all capped at 1,000 entries, which means discovery of old threads is quite limited.
Keyword search increases the discovery of old threads. In my case I'm searching for a lot of keywords (like CVE, RCE, vulnerability, etc.) in order to discover more threads.
Would love to hear feedback. Currently it's just a quick-n'-dirty prototype, because the threat of my favorite subreddits going dark is quite immediate. I tried to strip as much noise from the schema as possible; the tool only archives the subreddit threads and comments, with the idea of scraping the linked websites/blog articles at a later point in time.
[1] https://old.reddit.com/r/DataHoarder/comments/142l1i0/archiveteam_has_saved_over_108_billion_reddit/
[2] https://github.com/cookiengineer/reddit-archivar
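The keyword-based discovery described above amounts to issuing one subreddit-scoped search listing per keyword, each of which is again capped at ~1,000 results. A hedged sketch of the URL construction (parameter choices such as `sort=new` are my assumptions, not necessarily what reddit-archivar does):

```go
package main

import (
	"fmt"
	"net/url"
)

// searchURLs builds one Reddit search URL per keyword, restricted to
// a single subreddit. Since every search listing is capped at about
// 1,000 results, a broad keyword list widens discovery of old threads.
func searchURLs(subreddit string, keywords []string) []string {
	urls := make([]string, 0, len(keywords))
	for _, kw := range keywords {
		q := url.Values{}
		q.Set("q", kw)
		q.Set("restrict_sr", "on") // stay inside the subreddit
		q.Set("sort", "new")
		q.Set("limit", "100")
		urls = append(urls,
			"https://www.reddit.com/r/"+subreddit+"/search.json?"+q.Encode())
	}
	return urls
}

func main() {
	for _, u := range searchURLs("netsec", []string{"CVE", "RCE", "vulnerability"}) {
		fmt.Println(u)
	}
}
```

Each of these listings would then be paged with the same `after` cursor as any other listing, and the results deduplicated against threads already archived.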
- ArchiveTeam has saved over 10.8 BILLION Reddit links so far. We need YOUR help running ArchiveTeam Warrior to archive subreddits before they're gone indefinitely after June 12th!
Stats
The primary programming language of reddit-archivar is Go.