awesome-web-archiving vs wikiteam

| | awesome-web-archiving | wikiteam |
|---|---|---|
| Mentions | 13 | 23 |
| Stars | 1,818 | 688 |
| Growth | 2.1% | 1.3% |
| Activity | 5.2 | 3.8 |
| Last commit | 4 days ago | about 1 month ago |
| Language | Python | |
| License | Creative Commons Zero v1.0 Universal | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-web-archiving
- Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
https://github.com/iipc/awesome-web-archiving/blob/main/READ...
- DPReview.com is going down effective April 10.
People have pasted this around: https://github.com/iipc/awesome-web-archiving
Could probably do it with wget if you had enough time?
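For context, a minimal wget mirroring sketch along the lines this commenter suggests; the target URL and throttling values are illustrative assumptions, not from the thread:

```sh
# Mirror a site for offline viewing:
#   --mirror              recursive download with timestamping, infinite depth
#   --page-requisites     also fetch the CSS, images, and scripts each page needs
#   --convert-links       rewrite links so the local copy browses offline
#   --adjust-extension    save pages with .html extensions where appropriate
#   --no-parent           never ascend above the starting directory
#   --wait/--random-wait  throttle requests to be polite to the server
wget --mirror --page-requisites --convert-links --adjust-extension \
     --no-parent --wait=1 --random-wait \
     https://www.dpreview.com/
```

In practice a site that large would take days and a lot of disk, which is presumably what "if you had enough time" is getting at.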
- DPReview.com to close on April 10 after 25 years of operation
- This Layoff Does Not Exist: tech layoff announcements but weird
Maybe something on this list can help you: https://github.com/iipc/awesome-web-archiving
- Software to keep Website pages "alive"?
Awesome Web Archiving has a longer list of tools and software.
- How to Download All of Wikipedia onto a USB Flash Drive
Not related to the OP's topic or ZIM, but I was looking into archiving my bookmarks and other content like documentation sites and wikis. I'll list some of the things I ended up using.
ArchiveBox[1]: Pretty much a self-hosted Wayback Machine. It can save websites as plain HTML, screenshots, text, and some other formats. I have my bookmarks archived in it and use a bookmarklet to easily add new websites. If you use the docker-compose setup you can enable a full-text search backend for an easy search setup.
WebRecorder[2]: A browser extension that creates WACZ archives directly in the browser, capturing exactly the content you load. I use it on sites with annoying dynamic content that tools like the Wayback Machine and ArchiveBox wouldn't be able to copy.
ReplayWeb[3]: An interface for browsing archive formats like WARC, WACZ, and HAR; using it feels just like browsing in your browser. It can be self-hosted as well for the full offline experience.
browsertrix-crawler[4]: A CLI tool that scrapes websites and outputs WACZ. It's super easy to run with Docker, and I use it to scrape entire blogs and docs for offline use. It uses Chrome to load webpages and has some extra features like custom browser profiles, interactive login, and autoscroll/autoplay. I use the `--generateWACZ` parameter so I can browse the final output in ReplayWeb (a sketch of this workflow follows the footnotes below).
For bookmarks and miscellaneous webpage archiving, ArchiveBox should be more than enough. Check out this repo for an amazing list of tools and resources: https://github.com/iipc/awesome-web-archiving
[1] https://github.com/ArchiveBox/ArchiveBox
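As a rough sketch of the browsertrix-crawler workflow described above: the image name and `--generateWACZ` flag match the project's documented Docker usage, while the target URL, collection name, and output path are illustrative assumptions.

```sh
# Run browsertrix-crawler under Docker; ./crawls on the host receives the output.
docker run -v "$PWD/crawls:/crawls/" webrecorder/browsertrix-crawler \
    crawl --url https://blog.example.com/ \
          --generateWACZ \
          --collection example-blog

# The crawl typically lands at ./crawls/collections/example-blog/example-blog.wacz,
# which can then be opened in ReplayWeb for offline browsing.
```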
- Self Hosted Roundup #14
- SingleFile: Save a Complete Web Page into a Single HTML File
- [HELP] Starting Out for a Beginner
- Reflections as the Internet Archive turns 25
wikiteam
- Miraheze to Shut Down
WikiTeam is working on the archival, with the usual XML dumps and image dumps. You can follow updates and see how to help:
https://github.com/WikiTeam/wikiteam/issues/465#issuecomment...
https://wiki.archiveteam.org/index.php/Miraheze
Even before the announcement, we had XML dumps for thousands of Miraheze wikis.
- Dan Parker has accidentally deleted Yugipedia without a recent backup
- Questions about mirroring fandom/wiki sites
The linked thread has the information you need. Read the README on the GitHub page for WikiTeam's dump generator.
- WikiTeam: We archive wikis, from Wikipedia to tiniest wikis
- PSA: Fandom has acquired GameSpot, GameFAQs, Metacritic, and more.
- Best way to archive a wiki "Powered by MediaWiki"
ArchiveTeam WikiTeam has download tooling: https://github.com/WikiTeam/wikiteam
- Archiving Wiki (Fandom) Pages
Hi all - I'm trying to archive a number of Fandom pages. Upon checking out this subreddit, I've found a few ways of doing so, and am currently working with the WikiTeam Python tool (https://github.com/WikiTeam/wikiteam).
- [Censorship] Fandom Wiki (formerly Wikia) is deleting wikis on sexual topics on November 24, such as the Monster Girl Encyclopedia wiki
HTTrack is a good choice for getting a local copy of the wiki that you can browse personally (a sketch below shows the idea), but note that if you ever have to back up a wiki in a format suitable for migrating to another wiki site, something like ArchiveTeam's WikiTeam tool would be suitable. It also has a built-in tool to upload the resulting backup to archive.org, as someone has done with the MGQ wiki here.
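For illustration, a minimal HTTrack invocation for that kind of personal browsable mirror; the wiki URL, output directory, and filter are placeholder assumptions, not from the comment:

```sh
# Mirror a wiki into ./wiki-mirror for local browsing.
# The +filter keeps the crawl on the wiki's own domain.
httrack "https://wiki.example.org/" \
    -O "./wiki-mirror" \
    "+*.wiki.example.org/*" \
    -v
```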
- Fandom Wiki (formerly Wikia) is deleting wikis on sexual topics in 2 weeks
I found ArchiveTeam's WikiTeam tool relatively easy to use. I just had to download the repository from GitHub (Code > Download ZIP in the top right), have Python installed, open a command prompt in the folder, copy-paste the commands from their front page, have it fail complaining about missing modules, look up the command for installing Python modules, and install the ones it needs. Their tutorial has additional instructions for uploading the resulting archives to archive.org and for downloading lists of wikis. (A condensed version of those steps is sketched below.)
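A condensed sketch of the steps this commenter describes, assuming the Python dumpgenerator from the WikiTeam README; the wiki URL is a placeholder:

```sh
# Fetch the WikiTeam tools and install the Python modules they depend on
# (this is the step that otherwise fails with missing-module errors).
git clone https://github.com/WikiTeam/wikiteam.git
cd wikiteam
pip install --upgrade -r requirements.txt

# Dump a MediaWiki site: full page history as XML plus all images.
python dumpgenerator.py https://wiki.example.org --xml --images
```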
- I need help with WikiTeam
If anyone has used this app, please help me. I have followed the instructions in the readme.txt at https://github.com/WikiTeam/wikiteam and I have dumpgenerator.py, but when I run it with these commands:
What are some alternatives?
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
webscrapbook - A browser extension that captures web pages to local device or backend server for future retrieval, organization, annotation, and edit. This project inherits from legacy Firefox add-on ScrapBook X.
obelisk - Go package and CLI tool for saving web page as single HTML file
reddit-save - A Python tool for backing up your saved and upvoted posts on reddit to your computer.
SingleFile-MV3 - SingleFile version compatible with Manifest V3. The future, right now!
diskimageprocessor - Tool for automated processing of disk images in BitCurator
firefox-scrapbook - ScrapBook X – a legacy Firefox add-on that captures web pages to local device for future retrieval, organization, annotation, and edit.
rexport - Reddit takeout: export your account data as JSON: comments, submissions, upvotes etc. 🦖
youtube-dl - Command-line program to download videos from YouTube.com and other video sites
bitwarden-to-keepass - Export (most of) your Bitwarden items into KeePass (kdbx) database. That includes logins - with TOTP seeds, URIs, custom fields, attachments and secure notes