awesome-web-archiving vs monolith

| | awesome-web-archiving | monolith |
|---|---|---|
| Mentions | 13 | 23 |
| Stars | 1,818 | 9,972 |
| Growth | 2.1% | 24.1% |
| Activity | 5.2 | 7.2 |
| Latest commit | 4 days ago | about 1 month ago |
| Language | - | Rust |
| License | Creative Commons Zero v1.0 Universal | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-web-archiving
- Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
https://github.com/iipc/awesome-web-archiving/blob/main/READ...
- DPReview.com is going down effective April 10.
People have pasted this around: https://github.com/iipc/awesome-web-archiving Could probably do it with wget if you had enough time?
- DPReview.com to close on April 10 after 25 years of operation
- This Layoff Does Not Exist: tech layoff announcements but weird
Maybe something on this list can help you https://github.com/iipc/awesome-web-archiving
- Software to keep Website pages "alive"?
Awesome Web Archiving has a longer list of tools and software
- How to Download All of Wikipedia onto a USB Flash Drive
Not related to the OP topic or ZIM, but I was looking into archiving my bookmarks and other content like documentation sites and wikis. I'll list some of the things I ended up using.
ArchiveBox[1]: Pretty much a self-hosted Wayback Machine. It can save websites as plain HTML, screenshots, text, and some other formats. I have my bookmarks archived in it and a bookmarklet to easily add new websites to it. If you use the docker-compose setup, you can enable a full-text search backend for an easy search setup.
WebRecorder[2]: A browser extension that creates WACZ archives directly in the browser, capturing exactly what content you load. I use it on sites with annoying dynamic content that tools like the Wayback Machine and ArchiveBox wouldn't be able to copy.
ReplayWeb[3]: An interface to browse archive formats like WARC, WACZ, and HAR. The interface is just like browsing through your browser. It can be self-hosted as well for the full offline experience.
browsertrix-crawler[4]: A CLI tool to scrape websites and output WACZ. It's super easy to run with Docker, and I use it to scrape entire blogs and docs for offline use. It uses Chrome to load webpages and has some extra features like custom browser profiles, interactive login, and autoscroll/autoplay. I use the `--generateWACZ` parameter so I can use ReplayWeb to easily browse through the final output.
For bookmark and misc webpage archiving, ArchiveBox should be more than enough. Check out this repo for an amazing list of tools and resources: https://github.com/iipc/awesome-web-archiving
[1] https://github.com/ArchiveBox/ArchiveBox
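The browsertrix-crawler workflow described above boils down to a single Docker invocation. A hedged sketch, assuming the image name, flags, and output layout from the project's README (worth double-checking against the current docs):

```shell
# Crawl a site and emit a WACZ that ReplayWeb can open later.
docker run -v "$PWD/crawls:/crawls/" webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection example
# The archive should land under ./crawls/collections/example/
```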
- Self Hosted Roundup #14
- SingleFile: Save a Complete Web Page into a Single HTML File
- [HELP] Starting Out for a Beginner
- Reflections as the Internet Archive turns 25
monolith
- 🛠️ Non-AI Open Source Projects that are 🔥
Monolith is a CLI tool for saving complete web pages as a single HTML file.
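Typical usage, going by the project's README (the `-j` flag for stripping JavaScript is one of several opt-out options; verify against `monolith --help`):

```shell
# Save a page with all assets (CSS, images, fonts) inlined into one HTML file
monolith https://example.com/ -o example.html

# Same, but with JavaScript stripped from the result
monolith -j https://example.com/ -o example-nojs.html
```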
- An Introduction to the WARC File
I have never used monolith, so I can't say anything with certainty, but two things in your description are worth highlighting about the goals of WARC versus the umpteen bazillion "save this one page I'm looking at as a single file" type projects:
1. WARC is designed, as a goal, to archive the request-response handshake. It does not get into the business of trying to make it easy for a browser to subsequently display that content, since that's a browser's problem.
2. Using your cited project specifically, observe the number of "well, save it, but ..." options <https://github.com/Y2Z/monolith#options>, which stand in stark contrast to the archiving goals I just described. It's not a good snapshot of history if the server responded with `content-type: text/html;charset=iso-8859-1` back in the 90s, but "modern tools" want everything to be UTF-8, so we'll just convert it, shall we? Bah, I don't like JavaScript, so we'll just toss that out, shall we? And so on.
For 100% clarity: monolith and similar tools may work fantastically for any individual's workflow, and I'm not here to yuck anyone's yum; but I do want to highlight that, all things being equal, it should always be possible to derive monolith files from WARC files, because WARC files are (or at least aim to be) a perfect-fidelity record of what the exchange was. I would guess only pcap files would be of higher fidelity, but they also contain a lot more extraneous or potentially privacy-violating detail.
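The record structure being described is simple enough to poke at with shell tools. A toy sketch for illustration only (real WARC records use CRLF line endings and carry more mandatory headers, such as WARC-Record-ID and WARC-Date; serious tooling should use a proper WARC library):

```shell
# A toy WARC-style record: version line, named headers, blank line, raw payload
cat > record.warc <<'EOF'
WARC/1.0
WARC-Type: response
WARC-Target-URI: http://example.com/
Content-Type: application/http; msgtype=response

HTTP/1.1 200 OK

hello
EOF

# The header block is everything up to the first blank line; the payload after
# it is the exact bytes the server sent, which is what makes WARC a fidelity format.
sed -n '1,/^$/p' record.warc | grep '^WARC-Type:'
# → WARC-Type: response
```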
- Reddit limits the use of API to 1000. Let's work together to save the content of the StableDiffusion subreddit as a team
- nix-init: Create Nix packages with just the URL, with support for dependency inference, license detection, hash prefetching, and more
```console
$ nix-init default.nix -u https://github.com/Y2Z/monolith
[...]
(press enter to select the defaults)
$ nix-build -E "(import <nixpkgs> { }).callPackage ./. { }"
[...]
$ result/bin/monolith --version
monolith 2.7.0
```
- What is the best free, least likely to discontinue, high data allowance app/service for saving articles/webpages permanently?
For example, here’s a command-line tool to save webpages as HTML files: https://github.com/Y2Z/monolith
- Offline Internet Archive
- Rust Easy! Modern Cross-platform Command Line Tools to Supercharge Your Terminal
monolith: Convert any webpage into a single HTML file with all assets inlined.
- Is there a way to (bulk) save all tabs as a pdf document in a quick way?
There is also a program (monolith: https://github.com/Y2Z/monolith) that does the same.
- Is there a good list of up-to-date data archiving tools for different websites?
Besides wget, for single pages I use monolith: https://github.com/Y2Z/monolith
- Ask HN: Full-text browser history search forever?
You can pipe the URLs through something like monolith[1].
https://github.com/Y2Z/monolith
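A hedged sketch of that pipe, assuming monolith is installed and that `urls.txt` (a hypothetical history export, one URL per line) exists:

```shell
# Save each URL from a history export as its own self-contained HTML snapshot.
# monolith's -o flag names the output file; the rest is plain POSIX sh.
mkdir -p archive
n=0
while IFS= read -r url; do
  n=$((n + 1))
  monolith "$url" -o "archive/page-$n.html"
done < urls.txt
```

Full-text search over the resulting files is then a matter of grep or any desktop indexer.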
What are some alternatives?
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
SingleFile - Web Extension for saving a faithful copy of a complete web page in a single HTML file
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
obelisk - Go package and CLI tool for saving web page as single HTML file
SingleFile-MV3 - SingleFile version compatible with Manifest V3. The future, right now!
shrface - Extend eww/nov with org-mode features, archive web pages to org files with shr.
firefox-scrapbook - ScrapBook X – a legacy Firefox add-on that captures web pages to local device for future retrieval, organization, annotation, and edit.
archivy - Archivy is a self-hostable knowledge repository that allows you to learn and retain information in your own personal and extensible wiki.
youtube-dl - Command-line program to download videos from YouTube.com and other video sites
Wallabag - wallabag is a self-hostable application for saving web pages: Save and classify articles. Read them later. Freely.