waybackpy
Wayback Machine API interface & a command-line tool (by akamhy)
pywb
Core Python Web Archiving Toolkit for replay and recording of web archives (by webrecorder)
| | waybackpy | pywb |
|---|---|---|
| Mentions | 6 | 7 |
| Stars | 435 | 1,309 |
| Growth | - | 1.6% |
| Activity | 0.0 | 7.2 |
| Last commit | 3 months ago | 8 days ago |
| Language | Python | JavaScript |
| License | MIT License | GNU General Public License v3.0 only |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
waybackpy
Posts with mentions or reviews of waybackpy.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-05.
- download all captures of a page in archive.org
I ended up using the waybackpy Python module to retrieve archived URLs, and it worked well. I think the feature you want for this is "snapshots", but I didn't test it myself.
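For reference, a minimal sketch of retrieving those archived URLs with waybackpy's CDX server API; the target URL and user agent below are placeholders.

```python
from waybackpy import WaybackMachineCDXServerAPI

# Placeholder target URL and user agent -- substitute your own.
url = "https://example.com/"
user_agent = "my-archive-script/1.0"

cdx_api = WaybackMachineCDXServerAPI(url, user_agent)

# snapshots() yields one object per capture known to the Wayback Machine.
for snapshot in cdx_api.snapshots():
    print(snapshot.timestamp, snapshot.archive_url)
```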
- Well worth the price
I read it because someone told me it has a section on using waybackpy, a tool/library that I wrote and maintain.
- any way to archive all my bookmarks on archive.org?
- Comex update 9/27/2022
```python
import requests
from datetime import datetime
from pathlib import Path
from waybackpy import WaybackMachineSaveAPI  # https://github.com/akamhy/waybackpy

ARCHIVE = False
LOCAL_SAVE = True

# https://www.cmegroup.com/clearing/operations-and-deliveries/nymex-delivery-notices.html
urls = [
    # COMEX & NYMEX Metal Delivery Notices
    "https://www.cmegroup.com/delivery_reports/MetalsIssuesAndStopsReport.pdf",
    "https://www.cmegroup.com/delivery_reports/MetalsIssuesAndStopsMTDReport.pdf",
    "https://www.cmegroup.com/delivery_reports/MetalsIssuesAndStopsYTDReport.pdf",
    # NYMEX Energy Delivery Notice
    "https://www.cmegroup.com/delivery_reports/EnergiesIssuesAndStopsReport.pdf",
    "https://www.cmegroup.com/delivery_reports/EnergiesIssuesAndStopsYTDReport.pdf",
    # Warehouse & Depository Stocks
    "https://www.cmegroup.com/delivery_reports/Gold_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Gold_Kilo_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Silver_stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Copper_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/PA-PL_Stck_Rprt.xls",
    "https://www.cmegroup.com/delivery_reports/Aluminum_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Zinc_Stocks.xls",
    "https://www.cmegroup.com/delivery_reports/Lead_Stocks.xls",
]

# Required for both the Wayback Machine and cmegroup.com:
# present yourself as an up-to-date Chrome browser.
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
headers = {'User-Agent': user_agent}

if ARCHIVE:
    for url in urls:
        filename = url.split("/")[-1]
        print(f"Archiving {filename} on Wayback Machine...")
        # Limited to 15 requests / minute / IP. My VPN IP was already throttled :(
        # Couldn't even get this to work with a normal IP. Returned 429 error...
        save_api = WaybackMachineSaveAPI(url, user_agent)
        res = save_api.save()
        print(f"Res: {res}")

if LOCAL_SAVE:
    datestr = datetime.now().strftime('%m-%d-%Y')
    datedir = Path(datestr)
    datedir.mkdir(exist_ok=True)
    for url in urls:
        filename = url.split("/")[-1]
        print(f"Fetching {filename}...")
        try:
            resp = requests.get(url, timeout=3, allow_redirects=True, headers=headers)
            if resp.ok:
                filepath = datedir / filename
                if not filepath.exists():
                    with open(filepath, mode="wb") as f:
                        f.write(resp.content)
                else:
                    print(f"ERROR: Filepath already exists: {filepath}")
            else:
                print(f"ERROR: response for {filename}: {resp}")
        except requests.ReadTimeout:
            print("timeout")
```
- Is there a way to download all the files Internet Archive has captured for a domain? I am trying to recover tweets from a suspended Twitter account, but the account as a whole was never captured in the Wayback Machine, just some individual tweets and JSON files.
- Run a simple script against the Wayback Machine API to bulk back up politically sensitive ("tower-charging") Zhihu answers
A package that wraps the Wayback Machine API. GitHub: https://github.com/akamhy/waybackpy
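As a rough illustration of that kind of bulk backup with waybackpy's Save API; the URL list and user agent below are placeholders, and the Save API is rate-limited, as the script above notes.

```python
import time
from waybackpy import WaybackMachineSaveAPI

# Placeholder list of pages to archive -- e.g. Zhihu answer URLs.
urls = [
    "https://www.zhihu.com/question/12345/answer/67890",
]
user_agent = "my-archive-script/1.0"

for url in urls:
    save_api = WaybackMachineSaveAPI(url, user_agent)
    archive_url = save_api.save()  # submits the page to the Wayback Machine
    print(archive_url)
    time.sleep(5)  # stay well under the Save API rate limit
```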
pywb
Posts with mentions or reviews of pywb.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-10-07.
- Is there any good software for deduping (deduplicating) content in WARC files?
I have thousands of bookmarks on raindrop.io that I've been wanting to archive for a while. However, I've archived ~150 pages so far with pywb and it ended up being 500 MB across two WARCs, even with the dedupe setting specified in my settings file. It dedupes while archiving pages. I want software to catch any spots that were missed and make sure the WARCs are actually deduped.
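As a rough sketch of what a post-hoc cleanup pass could look like with warcio (listed under the alternatives below), keyed on WARC-Payload-Digest; the filenames are placeholders, and a real dedupe would normally emit revisit records rather than dropping duplicate responses outright.

```python
from warcio.archiveiterator import ArchiveIterator
from warcio.warcwriter import WARCWriter

# Placeholder filenames.
SRC = "collection.warc.gz"
DST = "deduped.warc.gz"

seen_digests = set()

with open(SRC, "rb") as inp, open(DST, "wb") as out:
    writer = WARCWriter(out, gzip=True)
    for record in ArchiveIterator(inp):
        digest = record.rec_headers.get_header("WARC-Payload-Digest")
        if record.rec_type == "response" and digest:
            if digest in seen_digests:
                # Duplicate payload already written; skip it.
                # (A proper dedupe would write a revisit record instead.)
                continue
            seen_digests.add(digest)
        writer.write_record(record)
```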
- Is there a way to easily and reliably SSH to my laptop no matter what wifi the laptop is connected to? I have no clue.
I don't know if the solution would be related or relevant to this, but I would also want to be able to remotely launch and access a web server, pywb, in Safari on my iPad, also no matter what wifi I'm on. On a Mac it would be launched with the command wayback, and the server would be accessed in the browser at localhost:8080.
- I can't install a Python package, pywb, looks like a problem with brotlipy. What can I do?
Check their GitHub site. I would try `git clone https://github.com/webrecorder/pywb`.
- Purevolume archives?
I've been trying to open those large WARC files recently. I've tried webrecorder, replayweb, pywb, and warcat, but none of these worked well for me.
- Ran grab-site and now have some warc.gz files etc. The site in question was originally hosted in a mixture of HTML and JavaScript; what's the best and easiest way to make this accessible as a user for offline personal use?
pywb, but it requires creating a full copy of the data: https://github.com/webrecorder/pywb/issues/408
- How good is ArchiveWeb.page?
I found it to be good at loading small WARCs quickly, but it can take longer if the WARC is larger. Webarchive Player, while old and discontinued, I've found works better than Webrecorder Player and replayweb.page. If you want newer software to replay WARCs, try pywb. I find it to be the best WARC player.
- Saving all browsed websites automatically
I use pywb in proxy recording mode.
What are some alternatives?
When comparing waybackpy and pywb you can also consider the following projects:
wayback-machine-scraper - A command-line utility and Scrapy middleware for scraping time series data from Archive.org's Wayback Machine.
conifer - Collect and revisit web pages.
TikUp - An auto downloader and uploader for TikTok videos.
warcio - Streaming WARC/ARC library for fast web archive IO