| | archivenow | waybackpy |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 391 | 405 |
| Growth | 1.0% | - |
| Activity | 3.3 | 0.0 |
| Latest commit | 4 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
archivenow
- Best way to feed Wayback Machine a list of URLs?
I crawled a website that I want to make sure is completely captured by the Wayback Machine, but now I need to figure out how to efficiently "feed" all the URLs into Wayback. I found archivenow, but I'm terrible at Python, so I'm not sure of the best way to point the program at the txt file and, preferably, create another txt/csv file listing each original URL alongside its new archived URL. Any help would be greatly appreciated!
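archivenow can also be driven from Python rather than the shell; per its README it exposes `archivenow.push(url, "ia")`, which returns the archived URI. A minimal sketch of the asked-for workflow (read a txt file of URLs, archive each one, write an original→archived CSV). The `push` callable is injected so the rate-limited service call stays swappable; check the return shape of `archivenow.push` against your installed version before relying on it:

```python
import csv
import time

def archive_list(url_file, out_csv, push, per_minute=15):
    """Read one URL per line from url_file, call push(url) -> archived_url
    for each, and write (original, archived) pairs to out_csv.
    Sleeps a minute after every `per_minute` pushes to respect
    Save Page Now's rate limit."""
    with open(url_file) as f:
        urls = [line.strip() for line in f if line.strip()]
    rows = []
    for i, url in enumerate(urls, start=1):
        rows.append((url, push(url)))
        if i % per_minute == 0 and i < len(urls):
            time.sleep(60)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["original", "archived"])
        writer.writerows(rows)
    return rows
```

With archivenow installed, `push` could be something like `lambda u: archivenow.push(u, "ia")[0]`.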
- Match Thread: West Brom vs Liverpool | Premier League
```bash
#!/bin/bash
function __longnow(){
    # Use: Takes a txt file with one link on each line and pushes all the links to the internet archive
    # References:
    # https://unix.stackexchange.com/questions/181254/how-to-use-grep-and-cut-in-script-to-obtain-website-urls-from-an-html-file
    # https://github.com/oduwsdl/archivenow
    # For the double underscore, see: https://stackoverflow.com/questions/13797087/bash-why-double-underline-for-private-functions-why-for-bash-complet/15181999
    input=$1
    counter=1
    while IFS= read -r line
    do
        wait
        if [ $((counter % 15)) -eq 0 ]
        then
            printf "\nArchive.org doesn't accept more than 15 links per min; sleeping for 1min...\n"
            sleep 1m
        fi
        echo "Url: $line"
        archivenow --ia "$line" >&1
        ## alternatively, archivenow --all "$line" >&1 if you want to use all archive services rather than just the internet archive
        counter=$((counter+1))
    done < "$input"
}

## This gets news about Gaza from the Google News RSS endpoint
echo 'Gaza' | sed 's/^.*: //' | sed 's/ /%20/g' | sed 's|^|https://news.google.com/rss/search?q=|' | xargs wget --quiet > /dev/null 2>&1 &
wait

## This parses the downloaded XML and appends data about each article to a file called listofnews.txt
echo "Gaza" | sed 's/^/search?q=/' | sed 's/^/"/;s/$/"/' | xargs xmllint --format 2>/dev/null | grep -E "title|pubDate|link" | sed -E 's/.*>(.*)<.*/\1/' | sed '0~3 a\' >> listofnews.txt

## This just gets the links and creates something to be fed to an archiver service
echo "Gaza" | sed 's/^/search?q=/' | sed 's/^/"/;s/$/"/' | xargs xmllint --format 2>/dev/null | grep "link" | sed -E 's/.*>(.*)<.*/\1/' > tempforarchiver.txt

__longnow tempforarchiver.txt

rm 'search?q=Gaza'
rm tempforarchiver.txt

## Add this to cron with something like:
## $ crontab -e
## 30 22 * * * /the/location/of/this/file   (without the leading "##")
## This might give you some grief if bash or the archivenow utility can't be found from within the cron instance.
```
- Archiving the Gaza conflict
- How to easily save web pages to the Internet Archive's Wayback Machine
waybackpy
- Download all captures of a page in archive.org
I ended up using the waybackpy Python module to retrieve archived URLs; it worked well. I think the feature you want for this is "snapshots", but I didn't test that myself.
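waybackpy's snapshot listing is backed by the Internet Archive's CDX Server API at https://web.archive.org/cdx/search/cdx. If you'd rather see what happens under the hood, here is a small sketch that builds the raw query URL for every capture of a page; the field names follow the CDX server's documented parameters, so verify them before relying on this:

```python
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query(url, from_ts=None, to_ts=None, limit=None):
    """Build a CDX Server query URL that lists captures of `url` as JSON
    rows of (timestamp, original URL, HTTP status)."""
    params = {"url": url, "output": "json", "fl": "timestamp,original,statuscode"}
    if from_ts:
        params["from"] = from_ts
    if to_ts:
        params["to"] = to_ts
    if limit:
        params["limit"] = limit
    return CDX_ENDPOINT + "?" + urlencode(params)
```

Fetching that URL (with a real User-Agent header) returns a JSON array whose first row is the header; each capture can then be replayed at `https://web.archive.org/web/<timestamp>/<original>`.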
- Well worth the price
I have read it as someone told me that it has a section for using waybackpy, a tool/library that I wrote and maintain.
- Any way to archive all my bookmarks on archive.org?
- Comex update 9/27/2022
```python
import requests
from datetime import datetime
from pathlib import Path
from waybackpy import WaybackMachineSaveAPI  # https://github.com/akamhy/waybackpy

ARCHIVE = False
LOCAL_SAVE = True

# https://www.cmegroup.com/clearing/operations-and-deliveries/nymex-delivery-notices.html
urls = [
    # COMEX & NYMEX Metal Delivery Notices
    "https://www.cmegroup.com/delivery_reports/MetalsIssuesAndStopsReport.pdf",
    "https://www.cmegroup.com/delivery_reports/MetalsIssuesAndStopsMTDReport.pdf",
    "https://www.cmegroup.com/delivery_reports/MetalsIssuesAndStopsYTDReport.pdf",
    # NYMEX Energy Delivery Notices
    "https://www.cmegroup.com/delivery_reports/EnergiesIssuesAndStopsReport.pdf",
    "https://www.cmegroup.com/delivery_reports/EnergiesIssuesAndStopsYTDReport.pdf",
    # Warehouse & Depository Stocks
    "https://www.cmegroup.com/delivery_reports/Gold_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Gold_Kilo_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Silver_stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Copper_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/PA-PL_Stck_Rprt.xls",
    "https://www.cmegroup.com/delivery_reports/Aluminum_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Zinc_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Lead_Stocks.xls"
]

# Required for both Wayback and cmegroup.com: present yourself as an updated Chrome browser.
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
headers = {'User-Agent': user_agent}

if ARCHIVE:
    for url in urls:
        filename = url.split("/")[-1]
        print(f"Archiving {filename} on Wayback Machine...")
        # Limited to 15 requests / minute / IP. My VPN IP was already throttled :(
        # Couldn't even get this to work with a normal IP. Returned a 429 error...
        save_api = WaybackMachineSaveAPI(url, user_agent)
        res = save_api.save()
        print(f"Res: {res}")

if LOCAL_SAVE:
    datestr = datetime.now().strftime('%m-%d-%Y')
    datedir = Path(datestr)
    datedir.mkdir(exist_ok=True)
    for url in urls:
        filename = url.split("/")[-1]
        print(f"Fetching {filename}...")
        try:
            resp = requests.get(url, timeout=3, allow_redirects=True, headers=headers)
            if resp.ok:
                filepath = datedir / filename
                if not filepath.exists():
                    with open(filepath, mode="wb") as f:
                        f.write(resp.content)
                else:
                    print(f"ERROR: Filepath already exists: {filepath}")
            else:
                print(f"ERROR: response for {filename}: {resp}")
        except requests.ReadTimeout:
            print("timeout")
```
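The 429 throttling mentioned in the comments (Save Page Now limits requests per minute per IP) is usually handled with retries and exponential backoff. A generic sketch of such a helper — the function below is hypothetical, not part of waybackpy — that could wrap `save_api.save()`:

```python
import time

def call_with_backoff(fn, retries=4, base_delay=60, sleep=time.sleep):
    """Call fn(); if it raises, wait base_delay * 2**attempt seconds and
    retry, re-raising after the final attempt. `sleep` is injectable so
    the waiting policy can be tested without real delays."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)
```

Usage would be something like `res = call_with_backoff(save_api.save)`; in practice you would catch only the library's rate-limit exception rather than bare `Exception`.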
- Is there a way to download all the files Internet Archive has captured for a domain? I am trying to recover tweets from a suspended twitter account, but the account as a whole was never captured in the Wayback Machine, just some individual tweets and json files.
- Run a simple script against the Wayback Machine API to batch back up Zhihu answers to sensitive ("tower-charging") questions
A package that wraps the Wayback Machine API; GitHub: https://github.com/akamhy/waybackpy
What are some alternatives?
wayback-machine-spn-scripts - Bash scripts which interact with Internet Archive Wayback Machine's Save Page Now
wayback-machine-scraper - A command-line utility and Scrapy middleware for scraping time series data from Archive.org's Wayback Machine.
videoduplicatefinder - Video Duplicate Finder - Crossplatform
TikUp - An auto downloader and uploader for TikTok videos.
wayback - A bot for Telegram, Mastodon, Slack, and other messaging platforms that archives webpages.
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
wayback_archiver - Ruby gem to send URLs to Wayback Machine
pywb - Core Python Web Archiving Toolkit for replay and recording of web archives