wayback-machine-spn-scripts
Bash scripts which interact with Internet Archive Wayback Machine's Save Page Now
I crawled a website that I want to make sure is completely captured by the Wayback Machine, but now I need to figure out how to efficiently "feed" all the URLs into it. I found archivenow, but I'm terrible at Python, so I'm not sure of the best way to point the program at the txt file and, preferably, create another txt/csv file listing each original URL alongside its new archived URL. Any help would be greatly appreciated!
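If you'd rather avoid Python entirely, one option is a plain shell loop with curl. This is a hedged sketch, not from archivenow or the spn-scripts repo: it assumes the Save Page Now endpoint at `https://web.archive.org/save/<url>` (which redirects to the snapshot) and picks an arbitrary 5-second delay; check SPN's current rate limits before running it over a large list.

```shell
#!/bin/sh
# archive_list INFILE OUTFILE
#   INFILE:  text file with one URL per line
#   OUTFILE: CSV of "original_url,archived_url" pairs, appended to
archive_list() {
  while IFS= read -r url; do
    [ -n "$url" ] || continue   # skip blank lines
    # GET the Save Page Now endpoint; -L follows the redirect to the
    # snapshot, and %{url_effective} prints the final (archived) URL.
    snap=$(curl -s -o /dev/null -w '%{url_effective}' -L \
      "https://web.archive.org/save/$url")
    printf '%s,%s\n' "$url" "$snap" >> "$2"
    sleep 5   # be polite; SPN throttles rapid submissions
  done < "$1"
}
```

Usage would be something like `archive_list urls.txt archived.csv`, giving you the original-to-archived mapping you wanted in one file.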
You could grab ArchiveBox (https://archivebox.io/) to crawl the site; it'll automatically submit the pages to the IA, and you get a local WARC too. :-)
I use this https://github.com/overcast07/wayback-machine-spn-scripts