archivenow
A Tool To Push Web Resources Into Web Archives (by oduwsdl)
wayback-spn-client
Access the Internet Archive’s “Save Page Now” feature with Headless Chrome (by Mr0grog)
|  | archivenow | wayback-spn-client |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 391 | 3 |
| Growth | 1.0% | - |
| Activity | 3.3 | 0.0 |
| Latest commit | 3 months ago | over 5 years ago |
| Language | Python | JavaScript |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
archivenow
Posts with mentions or reviews of archivenow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-14.
- Best way to feed Wayback Machine a list of URLs?

  I crawled a website that I want to make sure is completely captured by the Wayback Machine, but now I need to figure out how to efficiently "feed" all the URLs into Wayback. I found archivenow, but I'm terrible at Python, so I'm not sure of the best way to point the program at the txt file and, preferably, create another txt/csv file listing each original URL with its new archived URL. Any help would be greatly appreciated! (A sketch of one approach appears after this list.)
- Match Thread: West Brom vs Liverpool | Premier League

```bash
#!/bin/bash

function __longnow(){
  # Use: takes a txt file with one link on each line and pushes all the links to the Internet Archive
  # References:
  # https://unix.stackexchange.com/questions/181254/how-to-use-grep-and-cut-in-script-to-obtain-website-urls-from-an-html-file
  # https://github.com/oduwsdl/archivenow
  # For the double underscore, see: https://stackoverflow.com/questions/13797087/bash-why-double-underline-for-private-functions-why-for-bash-complet/15181999
  input=$1
  counter=1
  while IFS= read -r line
  do
    wait
    if [ $(($counter % 15)) -eq 0 ]
    then
      printf "\nArchive.org doesn't accept more than 15 links per min; sleeping for 1min...\n"
      sleep 1m
    fi
    echo "Url: $line"
    archivenow --ia "$line" >& 1  ## alternatively, archivenow --all "$line" >& 1 if you want to use all archive services rather than just the Internet Archive
    counter=$((counter+1))
  done < "$input"
}

## This gets news about Gaza from the Google News RSS endpoint
echo 'Gaza' | sed 's/^.*: //' | sed 's/ /%20/g' | sed 's|^|https://news.google.com/rss/search?q=|' | xargs wget --quiet > /dev/null 2>&1 &
wait

## This parses the XML and appends data about each article to a file called "listofnews"
echo "Gaza" | sed 's/^/search?q=/' | sed 's/^/"/;s/$/"/' | xargs xmllint --format 2>/dev/null | grep -E "title|pubDate|link" | sed -E 's/.*>(.*)<.*/\1/' | sed '0~3 a\' >> listofnews.txt

## This just gets the links and creates something to be fed to an archiver service
echo "Gaza" | sed 's/^/search?q=/' | sed 's/^/"/;s/$/"/' | xargs xmllint --format 2>/dev/null | grep "link" | sed -E 's/.*>(.*)<.*/\1/' > tempforarchiver.txt

__longnow tempforarchiver.txt

rm 'search?q=Gaza'
rm tempforarchiver.txt

## Add this to cron with something like
## $ crontab -e
## 30 22 * * * /the/location/of/this/file   ### Without the "#"
## This might give you some grief if bash or the archivenow utility can't be found from within the cron instance.
```
- Archiving the Gaza conflict
- How to easily save web pages to the Internet Archive's Wayback Machine
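The first post above asks how to point archivenow at a text file of URLs and record the original/archived URL pairs. Below is a minimal sketch of one way to do that with archivenow's Python interface, assuming `archivenow.push(url, "ia")` returns the archived URL(s) as a list, as the project's README shows; the file names `urls.txt` and `archived.csv` are placeholders.

```python
# Sketch: push every URL from a text file to the Internet Archive via archivenow
# and record original -> archived URL pairs in a CSV.
# Assumption: archivenow.push(url, "ia") returns a list of archived URLs.
import csv
import time

from archivenow import archivenow


def archive_url_list(input_path: str = "urls.txt", output_path: str = "archived.csv") -> None:
    with open(input_path) as infile, open(output_path, "w", newline="") as outfile:
        writer = csv.writer(outfile)
        writer.writerow(["original_url", "archived_url"])
        for line in infile:
            url = line.strip()
            if not url:
                continue  # skip blank lines
            result = archivenow.push(url, "ia")  # e.g. ['https://web.archive.org/web/.../...']
            writer.writerow([url, result[0] if result else ""])
            time.sleep(5)  # stay well under Save Page Now's rate limits


if __name__ == "__main__":
    archive_url_list()
```

The same loop could instead shell out to the archivenow CLI (`archivenow --ia <url>`), as the bash script above does; the sleep keeps the submission rate conservative.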
wayback-spn-client
Posts with mentions or reviews of wayback-spn-client. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-22.
- How to easily save web pages to the Internet Archive's Wayback Machine

  Download the wayback-spn-client for Linux. You'll need Node.js, but after that you run it by typing
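The exact command from that post isn't captured above. As a general illustration of what Save Page Now clients such as wayback-spn-client automate, here is a minimal sketch in plain Python (not the wayback-spn-client CLI) that requests a capture through the public https://web.archive.org/save/ endpoint; reading the snapshot location from the Content-Location header is an assumption about the response format.

```python
# Sketch: trigger a Wayback Machine "Save Page Now" capture directly over HTTP.
# This is not the wayback-spn-client invocation, just the underlying idea.
from typing import Optional

import requests


def save_page_now(url: str, timeout: int = 120) -> Optional[str]:
    """Ask the Wayback Machine to capture `url`; return the snapshot URL if reported."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=timeout)
    resp.raise_for_status()
    # The snapshot path is often reported in the Content-Location header
    # (an assumption about the response; fall back to None if it is absent).
    location = resp.headers.get("Content-Location")
    return f"https://web.archive.org{location}" if location else None


if __name__ == "__main__":
    print(save_page_now("https://example.com"))
```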
What are some alternatives?
When comparing archivenow and wayback-spn-client you can also consider the following projects:
wayback-machine-spn-scripts - Bash scripts which interact with Internet Archive Wayback Machine's Save Page Now
videoduplicatefinder - Video Duplicate Finder - Cross-platform