| | wayback-machine-spn-scripts | archivenow |
|---|---|---|
| Mentions | 8 | 4 |
| Stars | 92 | 391 |
| Growth | - | 1.0% |
| Activity | 1.6 | 3.3 |
| Last Commit | 7 days ago | 3 months ago |
| Language | Shell | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
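The exact weighting behind the activity score isn't given, but "recent commits have higher weight" suggests a decayed sum over commit ages. A minimal sketch of one such recency-weighted score, assuming an invented 0.9-per-week decay over a one-year window (the site's real formula may differ):

```bash
# Hypothetical recency-weighted activity score; the decay factor and the
# 52-week window are illustrative guesses. Run inside a git checkout.
git log --since="52 weeks ago" --format=%ct |
  awk -v now="$(date +%s)" '
    { weeks = int((now - $1) / 604800)   # commit age in whole weeks
      score += 0.9 ^ weeks }             # newer commits contribute more
    END { printf "activity ~ %.1f\n", score }'
```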
wayback-machine-spn-scripts
- Preserving Parliamentary Proposed Bills to Wayback Machine
  I created a script that scrapes a list of all currently proposed bill URLs, all PDFs of those bills, and the pages that list them. It then runs this script by /u/overcast07, which goes through each of those URLs and backs them up.
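A minimal sketch of that kind of scrape-then-archive pipeline, with a placeholder listing URL and link filters (the poster's actual source and selectors aren't given), assuming spn.sh's documented ability to take a text file of URLs as input:

```bash
# Collect bill pages and PDFs from a listing page, then archive them all.
# The listing URL and the href filters are placeholders.
LISTING="https://example.org/proposed-bills"
wget -qO- "$LISTING" |
  grep -Eo 'href="[^"]+"' | cut -d'"' -f2 |   # extract every link target
  grep -Ei '\.pdf$|/bill' |                   # keep bill pages and PDFs
  sort -u > bills.txt                         # (relative hrefs would need resolving)
./spn.sh bills.txt                            # feed the list to spn.sh
```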
- Wayback machine - Schedule automatic backups - Part 2
  I discovered that the Internet Archive's Wayback Machine has the "Save Page Now" tool, which allows you to manually back up a page. Through further research, and after asking here on Reddit, I discovered this script: https://github.com/overcast07/wayback-machine-spn-scripts
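For context, Save Page Now can also be triggered without the web form: a plain GET request to https://web.archive.org/save/<url> starts a capture, and spn.sh wraps that service with error handling and outlink support. A minimal manual example:

```bash
# Trigger a Save Page Now capture over HTTP; the target URL is an example.
curl -sL "https://web.archive.org/save/https://example.com" \
  -o /dev/null -w "HTTP %{http_code}\n"
```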
- Best way to feed Wayback Machine a list of URLs?
  I use this https://github.com/overcast07/wayback-machine-spn-scripts
- Most of the time I try to save a Reddit thread on Internet Archive Wayback Machine, it fails to save. Can this be fixed?
  Try using spn.sh if you can. In my experience, it's been more reliable than using the Wayback Machine's front-end.
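What that looks like in practice, with a placeholder thread URL (spn.sh's automatic error handling is the likely reason it fares better than one-shot submissions through the web form):

```bash
# Save a single Reddit thread via spn.sh instead of the web form.
# The thread URL is a placeholder.
./spn.sh "https://old.reddit.com/r/DataHoarder/comments/abc123/example_thread/"
```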
- [Request] Userscript that clicks button on webpage after 'X' minutes and 'Y' seconds once.
  I know I could use the excellent spn.sh in a while loop instead.
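A sketch of that while loop, with a placeholder URL and interval (the commenter's 'X' and 'Y' aren't specified):

```bash
# Re-archive a page on a fixed interval instead of clicking a button.
# The URL and the 330-second (5m30s) delay are placeholders.
while true; do
  ./spn.sh "https://example.com/page"
  sleep 330
done
```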
- Wayback Machine Downloader – Download an Entire Website from the Wayback Machine
- I wrote a Bash script that interfaces with Wayback Machine Save Page Now (automatic error handling, can submit selective/recursive outlinks)
- Shell script for Wayback Machine Save Page Now (has auto error handling, selective/recursive outlinks)
archivenow
- Best way to feed Wayback Machine a list of URLs?
  I crawled a website I want to make sure is completely captured by the Wayback Machine, but now I need to figure out how to efficiently "feed" all the URLs into it. I found archivenow, but I'm terrible at Python, so I'm not sure of the best way to point the program at the txt file and, preferably, create another txt/csv file listing the original URL with the new archived URL. Any help would be greatly appreciated!
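One way to do that without writing any Python, sketched with assumed file names (urls.txt in, mapping.csv out): archivenow's command-line interface prints the archived URL on stdout, so a shell loop can build the mapping.

```bash
# Push each URL from urls.txt to the Internet Archive via archivenow's CLI
# and record "original,archived" pairs. File names are assumptions.
echo "original_url,archived_url" > mapping.csv
while IFS= read -r url; do
  archived=$(archivenow --ia "$url")   # prints the snapshot URL on success
  echo "$url,$archived" >> mapping.csv
  sleep 5                              # stay gentle with Save Page Now rate limits
done < urls.txt
```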
- Match Thread: West Brom vs Liverpool | Premier League
  (The relevant comment is a shell script that pulls Google News results for "Gaza" and pushes the links to the Internet Archive via archivenow.)

```bash
#!/bin/bash
# Takes a txt file with one link per line and pushes every link to the
# Internet Archive. References:
#   https://unix.stackexchange.com/questions/181254/how-to-use-grep-and-cut-in-script-to-obtain-website-urls-from-an-html-file
#   https://github.com/oduwsdl/archivenow
# On the double underscore, see:
#   https://stackoverflow.com/questions/13797087/bash-why-double-underline-for-private-functions-why-for-bash-complet/15181999
function __longnow(){
  input=$1
  counter=1
  while IFS= read -r line; do
    if [ $((counter % 15)) -eq 0 ]; then
      printf "\nArchive.org doesn't accept more than 15 links per min; sleeping for 1min...\n"
      sleep 1m
    fi
    echo "Url: $line"
    archivenow --ia "$line"
    # Alternatively, archivenow --all "$line" to use all archive services
    # rather than just the Internet Archive.
    counter=$((counter + 1))
  done < "$input"
}

# Fetch news about Gaza from the Google News RSS endpoint; wget saves the
# response to a file literally named "search?q=Gaza".
echo 'Gaza' | sed 's/^.*: //' | sed 's/ /%20/g' \
  | sed 's|^|https://news.google.com/rss/search?q=|' \
  | xargs wget --quiet > /dev/null 2>&1 &
wait

# Parse the XML and append each article's title, date, and link to
# listofnews.txt, with a blank line after every third line (GNU sed address).
echo "Gaza" | sed 's/^/search?q=/' | sed 's/^/"/;s/$/"/' \
  | xargs xmllint --format 2>/dev/null \
  | grep -E 'title|pubDate|link' | sed -E 's/.*>(.*)<.*/\1/' \
  | sed '0~3G' >> listofnews.txt

# Extract just the links as input for the archiver.
echo "Gaza" | sed 's/^/search?q=/' | sed 's/^/"/;s/$/"/' \
  | xargs xmllint --format 2>/dev/null \
  | grep link | sed -E 's/.*>(.*)<.*/\1/' > tempforarchiver.txt

__longnow tempforarchiver.txt
rm 'search?q=Gaza' tempforarchiver.txt

# Schedule with cron ("crontab -e"), e.g.:
#   30 22 * * * /the/location/of/this/file
# This might give you some grief if bash or the archivenow utility can't be
# found from within the cron instance.
```
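On the cron caveat at the end of that script, one common fix, shown here with illustrative paths, is to pin PATH in the crontab itself and invoke everything by absolute path:

```bash
# Illustrative crontab entries (edit with: crontab -e); paths are placeholders.
PATH=/usr/local/bin:/usr/bin:/bin
30 22 * * * /bin/bash /home/user/archive-gaza.sh >> /home/user/archive-gaza.log 2>&1
```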
- Archiving the Gaza conflict
- How to easily save web pages to the Internet Archive's Wayback Machine
What are some alternatives?
reveddit - Review removed content on reddit. Uses the Pushshift API, built on code from removeddit.
videoduplicatefinder - Video Duplicate Finder - Cross-platform
savepagenow - A simple Python wrapper and command-line interface for archive.org's "Save Page Now" capturing service
waybackpack - Download the entire Wayback Machine archive for a given URL.
wayback - IA's public Wayback Machine (moved from SourceForge)
wayback-machine-downloader - Download an entire website from the Wayback Machine.
warrick - Recover lost websites from the Web Infrastructure
ArchiveBox - Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
libreddit - Private front-end for Reddit
wayback - A bot for Telegram, Mastodon, Slack, and other messaging platforms that archives webpages.