archivy VS replayweb.page

Compare archivy vs replayweb.page and see what their differences are.

archivy

Archivy is a self-hostable knowledge repository that allows you to learn and retain information in your own personal and extensible wiki. (by archivy)

replayweb.page

Serverless replay of web archives directly in the browser (by webrecorder)
                archivy          replayweb.page
Mentions        23               24
Stars           3,146            604
Growth          0.7%             4.6%
Activity        4.4              7.6
Latest commit   8 months ago     6 days ago
Language        Python           JavaScript
License         MIT License      GNU Affero General Public License v3.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

archivy

Posts with mentions or reviews of archivy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-03-11.

replayweb.page

Posts with mentions or reviews of replayweb.page. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-29.
  • Ask HN: How can I back up an old vBulletin forum without admin access?
    9 projects | news.ycombinator.com | 29 Jan 2024
    You can try https://replayweb.page/ as a test for viewing a WARC file. I do think you'll run into problems though with wanting to browse interconnected links in a forum format, but try this as a first step.

    One potential option, though definitely a bit more work: once you have all the WARC files downloaded, you can open them in Python using the warctools module (and maybe BeautifulSoup) and parse/extract the data embedded in the WARC archives into your own "fresh" HTML webserver.

    https://github.com/internetarchive/warctools
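
    A minimal sketch of that extraction approach, using Webrecorder's warcio library rather than the warctools module mentioned above (a different package with the same role) plus BeautifulSoup; the WARC file name and output directory are placeholders:

        # Dump HTML response records from a WARC to individual files.
        # pip install warcio beautifulsoup4
        from pathlib import Path
        from urllib.parse import quote

        from bs4 import BeautifulSoup
        from warcio.archiveiterator import ArchiveIterator

        out = Path("extracted_html")  # hypothetical output directory
        out.mkdir(exist_ok=True)

        with open("forum.warc.gz", "rb") as stream:  # placeholder WARC file
            for record in ArchiveIterator(stream):
                if record.rec_type != "response":
                    continue
                url = record.rec_headers.get_header("WARC-Target-URI")
                ctype = (record.http_headers.get_header("Content-Type") or "") if record.http_headers else ""
                if "text/html" not in ctype:
                    continue
                html = record.content_stream().read()
                title = BeautifulSoup(html, "html.parser").title
                print(url, "->", title.get_text(strip=True) if title else "(no title)")
                # One file per captured URL; serve the directory with any static web server.
                (out / (quote(url, safe="") + ".html")).write_bytes(html)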

  • Mozilla "MemoryCache" Local AI
    6 projects | news.ycombinator.com | 12 Dec 2023
    Also check out https://archiveweb.page which is open source, local, and lets you export archived data as WARC (ISO 28500). You can embed archives in web pages using their Web Component https://replayweb.page.
  • Best practices for archiving websites
    2 projects | /r/datacurator | 6 Dec 2023
    Use the Webrecorder tool suite https://webrecorder.net! It uses a new package file format for web archives called WACZ (Web Archive Zipped) which produces a single file that you can store anywhere and play back offline. It automatically indexes different file formats such as PDFs or media files contained on the website and is versioned. You can record WACZ using the Chrome extension ArchiveWeb.page https://archiveweb.page/ or use the Internet Archive’s Save Page Now button to preserve a website and have the WACZ file sent to you via email: https://inkdroid.org/2023/04/03/spn-wacz/. There are also more sophisticated tools like the in-browser crawler ArchiveWeb.page Express https://express.archiveweb.page or the command-line crawler BrowserTrix https://webrecorder.net/tools#browsertrix-crawler. But manually recording using the Chrome extension is definitely the easiest and most reliable way. To play back the WACZ file just open it in the offline web-app ReplayWeb.page https://replayweb.page.
  • Webrecorder: Capture interactive websites and replay them at a later time
    6 projects | news.ycombinator.com | 1 Aug 2023
    (Disclaimer: I work at Webrecorder)

    Our automated crawler browsertrix-crawler (https://github.com/webrecorder/browsertrix-crawler) uses Puppeteer to run browsers that we archive in by loading pages, running behaviors such as auto-scroll, and then recording the request/response traffic. We have some custom behavior for some social media and video sites to make sure that content is appropriately captured. It is a bit of a cat-and-mouse game as we have to continue to update these behaviors as sites change, but for the most part it works pretty well.

    The trickier part is in replaying the archived websites, as a certain amount of rewriting has to happen in order to make sure the HTML and JS are working with archived assets rather than the live web. One implementation of this is replayweb.page (https://github.com/webrecorder/replayweb.page), which does all of the rewriting client-side in the browser. This lets you interact with archived websites in WARC or WACZ format as if interacting with the original site.

  • Is there such a thing as a " Master Search Engine " for desktops and websites that can search for any keyword on the site and on the PC?
    2 projects | /r/DataHoarder | 4 Apr 2023
    Currently the only way I know of doing this is by making a WARC file of the site with something like ArchiveWeb and then opening the WARC file with something like ReplayWeb: https://replayweb.page/
  • DPReview is being Archived by the Archive Team
    6 projects | /r/photography | 21 Mar 2023
    Once archived, the entire site will be made available for anyone to browse on the internet archive. The entire .WARC will also be made available for anyone to download and view locally with a .WARC viewer such as Web Replay. You will be able to download the .WARC file from here.
  • How to Download All of Wikipedia onto a USB Flash Drive
    7 projects | news.ycombinator.com | 6 Oct 2022
  • Any very noob friendly way to extract images and videos from WARC files?
    2 projects | /r/Archiveteam | 17 Jul 2022
    I found this page here, but since I'm not able to try it out, I don't know if it actually works: https://replayweb.page/
  • Purevolume archives?
    4 projects | /r/Archiveteam | 16 May 2022
    I've been trying to open those large warc files these days. I've tried webrecorder, replayweb, pywb and warcat before but none of these worked well for me.
  • How to mirror multiple websites correctly?
    2 projects | /r/DataHoarder | 12 May 2022
    It's a completely different tool, but I like using grab-site https://github.com/archiveteam/grab-site . Try --wpull-args=--span-hosts='' or something to make it mirror all subdomains. It outputs in WARC format which can be read with a site like https://replayweb.page.

What are some alternatives?

When comparing archivy and replayweb.page you can also consider the following projects:

ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...

promnesia - Another piece of your extended mind

LinkAce - LinkAce is a self-hosted archive to collect links of your favorite websites.

archiveweb.page - A High-Fidelity Web Archiving Extension for Chrome and Chromium based browsers!

monolith - ⬛️ CLI tool for saving complete web pages as a single HTML file

grab-site - The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns

unmark - An open source to-do app for bookmarks.

Wallabag - wallabag is a self-hostable application for saving web pages: Save and classify articles. Read them later. Freely.

kanception

warcprox - WARC writing MITM HTTP/S proxy