| | replayweb.page | pywb |
|---|---|---|
| Mentions | 24 | 7 |
| Stars | 620 | 1,301 |
| Growth | 2.4% | 1.0% |
| Activity | 8.7 | 7.2 |
| Last commit | 22 days ago | 7 days ago |
| Language | TypeScript | JavaScript |
| License | GNU Affero General Public License v3.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
replayweb.page
-
Ask HN: How can I back up an old vBulletin forum without admin access?
You can try https://replayweb.page/ as a test for viewing a WARC file. I do think you'll run into problems though with wanting to browse interconnected links in a forum format, but try this as a first step.
One option, though definitely a bit more work: once you have all the WARC files downloaded, you can open them in Python using the warctools module and perhaps BeautifulSoup, and parse/extract all of the data embedded in the WARC archives into your own "fresh" HTML webserver.
https://github.com/internetarchive/warctools
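To make the extraction idea above concrete, here is a minimal stdlib-only sketch that reads uncompressed WARC records (a header block, a blank line, then Content-Length bytes of payload). The sample record is fabricated for illustration; in practice warctools or warcio handle gzip compression, revisit records, and the other details for you.

```python
import io

# A fabricated, uncompressed WARC record for demonstration only.
SAMPLE = (
    b"WARC/1.0\r\n"
    b"WARC-Type: response\r\n"
    b"WARC-Target-URI: http://example.com/\r\n"
    b"Content-Length: 24\r\n"
    b"\r\n"
    b"HTTP/1.1 200 OK\r\n\r\nhello"
    b"\r\n\r\n"
)

def iter_warc_records(stream):
    """Yield (headers, payload) for each record in an uncompressed WARC stream."""
    while True:
        line = stream.readline()
        if not line:
            return                      # end of stream
        if not line.startswith(b"WARC/"):
            continue                    # skip blank separators between records
        headers = {}
        while True:
            h = stream.readline().strip()
            if not h:
                break                   # blank line ends the header block
            name, _, value = h.partition(b":")
            headers[name.decode().strip()] = value.decode().strip()
        payload = stream.read(int(headers["Content-Length"]))
        yield headers, payload

for headers, payload in iter_warc_records(io.BytesIO(SAMPLE)):
    print(headers["WARC-Target-URI"], len(payload))  # → http://example.com/ 24
```

From the yielded (headers, payload) pairs, you could feed response payloads into BeautifulSoup to pull out the HTML you want to republish.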
-
Mozilla "MemoryCache" Local AI
Also check out https://archiveweb.page which is open source, local, and lets you export archived data as WARC (ISO 28500). You can embed archives in web pages using their Web Component https://replayweb.page.
-
Best practices for archiving websites
Use the Webrecorder tool suite https://webrecorder.net! It uses a new package file format for web archives called WACZ (Web Archive Zipped), which produces a single file that you can store anywhere and play back offline. It automatically indexes different file formats such as PDFs or media files contained on the website, and it is versioned. You can record WACZ using the Chrome extension ArchiveWeb.page https://archiveweb.page/ or use the Internet Archive's Save Page Now button to preserve a website and have the WACZ file sent to you via email: https://inkdroid.org/2023/04/03/spn-wacz/. There are also more sophisticated tools like the in-browser crawler ArchiveWeb.page Express https://express.archiveweb.page or the command-line crawler Browsertrix https://webrecorder.net/tools#browsertrix-crawler. But manually recording using the Chrome extension is definitely the easiest and most reliable way. To play back the WACZ file, just open it in the offline web app ReplayWeb.page https://replayweb.page.
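A WACZ file is "just" a ZIP with a defined layout, so you can peek inside one with Python's standard library. The sketch below builds a minimal mock WACZ in memory and lists its members; the member names follow the usual WACZ layout, but the contents here are empty placeholders, not a valid archive.

```python
import io
import zipfile

# Build a minimal mock WACZ: a ZIP following the WACZ layout with placeholder contents.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("datapackage.json", '{"profile": "data-package"}')  # package manifest
    z.writestr("archive/data.warc.gz", b"")                        # captured traffic
    z.writestr("indexes/index.cdx.gz", b"")                        # lookup index for replay
    z.writestr("pages/pages.jsonl", '{"url": "https://example.com/"}\n')  # page list

# Any ZIP tool can now inspect the package.
with zipfile.ZipFile(buf) as z:
    for name in z.namelist():
        print(name)
```

This is why a WACZ "single file which you can store anywhere" works: the WARC data, its index, and the page list travel together inside one ZIP.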
-
Webrecorder: Capture interactive websites and replay them at a later time
(Disclaimer: I work at Webrecorder)
Our automated crawler browsertrix-crawler (https://github.com/webrecorder/browsertrix-crawler) uses Puppeteer to run browsers in which we archive by loading pages, running behaviors such as auto-scroll, and recording the request/response traffic. We have custom behaviors for some social media and video sites to make sure that content is appropriately captured. It is a bit of a cat-and-mouse game, as we have to keep updating these behaviors as sites change, but for the most part it works pretty well.
The trickier part is replaying the archived websites, as a certain amount of rewriting has to happen to make sure the HTML and JS work with archived assets rather than the live web. One implementation of this is replayweb.page (https://github.com/webrecorder/replayweb.page), which does all of the rewriting client-side in the browser. This lets you interact with archived websites in WARC or WACZ format as if interacting with the original site.
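A toy illustration of the rewriting idea: prefix live-web URLs in the HTML so they resolve against an archive endpoint instead of the original host. The archive prefix below is hypothetical, and real replay rewriting is far more involved (it also has to handle CSS, JavaScript, iframes, and dynamically constructed URLs).

```python
import re

# Hypothetical archive endpoint; real systems encode a capture timestamp like this.
ARCHIVE_PREFIX = "https://archive.example/wayback/20230101000000/"

def rewrite_html(html: str) -> str:
    """Prefix absolute http(s) URLs in href/src attributes with the archive prefix."""
    return re.sub(
        r'((?:href|src)=")(https?://[^"]+)',
        lambda m: m.group(1) + ARCHIVE_PREFIX + m.group(2),
        html,
    )

print(rewrite_html('<a href="https://example.com/page">x</a>'))
# <a href="https://archive.example/wayback/20230101000000/https://example.com/page">x</a>
```

With every link pointing back into the archive, clicking around the replayed site never touches the live web.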
-
phpBB3 forum owner dead. Webhost purging soon. Need to quickly archive a site
The .tar.gz is a normal mirror: you can open the phpBB3/index.html file in your browser (after unzipping) or tell your web server of choice to serve it as static files. The .warc you can browse using https://replayweb.page/.
-
Is there such a thing as a " Master Search Engine " for desktops and websites that can search for any keyword on the site and on the PC?
Currently the only way I know of doing this is by making a WARC file of the site with something like ArchiveWeb and then opening the WARC file with something like ReplayWeb: https://replayweb.page/
-
DPReview is being Archived by the Archive Team
Once archived, the entire site will be made available for anyone to browse on the Internet Archive. The entire .WARC will also be made available for anyone to download and view locally with a .WARC viewer such as ReplayWeb.page. You will be able to download the .WARC file from here.
-
What are the best tools to archive a forum quickly?
I know how to work with WARC and WACZ files and can replay them using ReplayWeb, by the way: https://replayweb.page/ I know ReplayWeb lets me search the contents of WARC and WACZ files by keywords...
-
Finding the Forgotten Fotolog
Of course, the next question will be: once/if I find the right file with the correct URL, how do I actually access that content? I assume it will involve searching through the massive WARC files using dedicated software. I tried using replayweb.page, but many of the files seemed to be inaccessible.
-
How to Download All of Wikipedia onto a USB Flash Drive
pywb
-
Is there any good software for deduping (deduplicating) content in WARC files?
I have thousands of bookmarks on raindrop.io that I've been wanting to archive for a while. However, I've archived ~150 pages so far with pywb and it ended up being 500MB across two WARCs, even with the dedupe setting specified in my settings file, which dedupes while archiving pages. I want software to catch any spots it missed and to verify that the WARCs are actually deduplicated.
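Conceptually, post-hoc deduplication hashes each response payload and keeps only the first copy of each digest (real tools emit WARC "revisit" records instead of dropping duplicates outright). A minimal sketch of the bookkeeping, with plain (url, payload) tuples standing in for parsed WARC records:

```python
import hashlib

def dedupe(records):
    """Keep the first record for each unique payload digest; skip exact duplicates.

    `records` is an iterable of (url, payload_bytes) pairs.
    """
    seen = set()
    for url, payload in records:
        digest = hashlib.sha256(payload).hexdigest()
        if digest in seen:
            continue  # a real tool would emit a WARC 'revisit' record here
        seen.add(digest)
        yield url, payload

# Illustrative records: /a and /b share an identical payload.
records = [
    ("http://example.com/a", b"<html>same</html>"),
    ("http://example.com/b", b"<html>same</html>"),  # duplicate payload, skipped
    ("http://example.com/c", b"<html>other</html>"),
]
kept = list(dedupe(records))
print([u for u, _ in kept])  # ['http://example.com/a', 'http://example.com/c']
```

Note this only catches byte-identical payloads; pages that differ by a timestamp or session token hash differently and survive, which is one reason live-dedupe settings often leave more data behind than expected.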
-
Is there a way to easily and reliably SSH to my laptop no matter what wifi the laptop is connected to? I have no clue.
I don't know if the solution would be related or relevant to this, but I would also want to be able to remotely launch and access a web server, Pywb, on Safari on my iPad, also no matter what wifi I'm on. On a Mac, it would be launched with the command wayback and the server would be accessed on the Browser with localhost:8080.
-
I can't install a Python package, pywb, looks like a problem with brotlipy. What can I do?
Check their GitHub site. I would try `git clone https://github.com/webrecorder/pywb`.
-
Purevolume archives?
I've been trying to open those large warc files these days. I've tried webrecorder, replayweb, pywb and warcat before but none of these worked well for me.
-
Ran grab-site now have some warc.gz files etc, the site in question was originally hosted in a mixture of html and javascript, what's the best and easiest way to make this accessible as a user for offline personal use?
pywb, but it requires creating a full copy of the data: https://github.com/webrecorder/pywb/issues/408
-
How good is ArchiveWeb.page?
I found it to be good at loading small WARCs quickly, but it can take longer if the WARC is larger. Webarchive Player, while old and discontinued, I've found to work better than Webrecorder Player and replayweb.page. If you want newer software to replay WARCs, try pywb. I find it to be the best WARC player.
-
Saving all browsed websites automatically
I use pywb in proxy recording mode.
What are some alternatives?
archiveweb.page - A High-Fidelity Web Archiving Extension for Chrome and Chromium based browsers!
conifer - Collect and revisit web pages.
grab-site - The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
warcio - Streaming WARC/ARC library for fast web archive IO
archivy - Archivy is a self-hostable knowledge repository that allows you to learn and retain information in your own personal and extensible wiki.
awesome-selfhosted - A list of Free Software network services and web applications which can be hosted on your own servers
warcprox - WARC writing MITM HTTP/S proxy
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
22120 - 💾 Diskernet - Your preferred backup solution: a full-text-search archive built from your browsing and bookmarks, so sites stay available offline even when they go down. Formerly 22120 (project codename). [Moved to: https://github.com/i5ik/Diskernet]
TWINT - An advanced Twitter scraping & OSINT tool written in Python that doesn't use Twitter's API, allowing you to scrape a user's followers, following, Tweets and more while evading most API limitations.
webarchiveplayer - NOTE: This project is no longer being actively developed. Check out Webrecorder Player for the latest player: https://github.com/webrecorder/webrecorderplayer-electron (Legacy: desktop application for browsing web archives (WARC and ARC))