| | webcrystal | wikiteam |
|---|---|---|
| Mentions | 3 | 23 |
| Stars | 24 | 693 |
| Growth | - | 2.0% |
| Activity | 10.0 | 3.8 |
| Latest commit | over 1 year ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
webcrystal
-
SearXNG is a free internet metasearch engine
While it lacks a search feature (last I checked), there's always https://github.com/davidfstr/webcrystal
One .py file. Only one dependency (urllib3). With a little love the concept could become a full transparent proxy.
-
Offpunk 2.0
The project page says:
> The offline content is stored in ~/.cache/offpunk/ as plain .gmi/.html files. The structure of the Gemini-space is tentatively recreated. One key element of the design is to avoid any database. The cache can thus be modified by hand, content can be removed, used or added by software other than offpunk.
One ambition I have is to set up
https://github.com/davidfstr/webcrystal
> An archiving HTTP proxy and on-disk archival format for websites.
so that all my regular web browsing is auto-archived at some level.
It would sure be neat if the archive formats could be compatible. It would allow for a setup where everything I've seen with my eyes is immediately accessible programmatically or in a terminal. I feel that could open some significant productivity advantages, especially in the age of LLMs that also live in the terminal.
-
Auto-scraping web browser?
Webcrystal?
wikiteam
-
Miraheze to Shut Down
WikiTeam is working on the archival, with the usual XML dumps and image dumps. You can follow updates and see how to help:
https://github.com/WikiTeam/wikiteam/issues/465#issuecomment...
https://wiki.archiveteam.org/index.php/Miraheze
Already before the announcement we had XML dumps for thousands of Miraheze wikis.
- Dan Parker has accidentally deleted Yugipedia without a recent backup
-
Questions about mirroring fandom/wiki sites
The linked thread has the information you need. Read the README on the GitHub page for WikiTeam's dump generator.
- WikiTeam: We archive wikis, from Wikipedia to tiniest wikis
- PSA: Fandom has acquired GameSpot, GameFAQs, Metacritic and more.
-
Best way to archive a wiki "Powered by MediaWiki"
ArchiveTeam WikiTeam has download tooling: https://github.com/WikiTeam/wikiteam
-
Archiving Wiki (Fandom) Pages
Hi all - I'm trying to archive a number of fandom pages. Upon checking out this subreddit, I've found a few ways of doing so, and am currently working with the WikiTeam python tool (https://github.com/WikiTeam/wikiteam)
-
[Censorship] Fandom Wiki (formerly Wikia) is deleting wikis on sexual topics November 24, such as the Monster Girl Encyclopedia wiki
HTTrack is a good choice for keeping a local copy of the wiki that you can browse personally, but note that if you ever have to back up a wiki in a format suitable for migrating to another wiki site, something like ArchiveTeam's WikiTeam tool is the better fit. It also has a built-in tool to upload the resulting backup to archive.org, like how someone has done so with the MGQ wiki here.
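The migration-friendly format the commenter means is MediaWiki's own XML export, which any MediaWiki site can produce through its action API (`action=query` with the `export` parameter) — the same format WikiTeam dumps are built from. A small sketch that builds such a request; the wiki URL in the usage below is a placeholder:

```python
from urllib.parse import urlencode

def export_url(api_base, titles):
    """Build a MediaWiki action-API request that returns the given
    pages as XML export (the format WikiTeam dumps use)."""
    params = {
        "action": "query",
        "export": 1,        # wrap the result in <mediawiki> export XML
        "exportnowrap": 1,  # return raw XML, not a JSON envelope
        "titles": "|".join(titles),
    }
    return api_base + "?" + urlencode(params)
```

Fetching that URL with any HTTP client yields page text plus revision metadata that `Special:Import` on another MediaWiki can ingest — which is exactly what a crawled HTML mirror from HTTrack cannot do.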
-
Fandom Wiki (formerly Wikia) is deleting wikis on sexual topics in 2 weeks
I found ArchiveTeam's WikiTeam tool relatively easy to use. I just had to download the repository from GitHub (Code → Download ZIP in the top right), have Python installed, open a command prompt in the folder, copy-paste the commands from their front page, have it fail complaining about missing modules, look up the command to install Python modules, and install the modules it needs. Their tutorial has additional instructions for uploading the resulting archives to archive.org and for downloading lists of wikis.
-
I need help with WikiTeam
If anyone has used this app, please help me. I have followed the instructions in the README at https://github.com/WikiTeam/wikiteam and I have dumpgenerator.py, but when I run it with these commands:
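For reference, the invocation pattern from the WikiTeam README looks like the following — the Fandom URL is a placeholder, and the flags should be checked against the README for the version you downloaded:

```shell
# Install the missing modules the commenters above ran into
pip install requests

# Full-history XML dump plus images, via the wiki's api.php endpoint
python dumpgenerator.py --api=https://example.fandom.com/api.php --xml --images

# Resume an interrupted dump from its working directory
python dumpgenerator.py --api=https://example.fandom.com/api.php --xml --images \
    --resume --path=<previous-dump-dir>
```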
What are some alternatives?
diskimageprocessor - Tool for automated processing of disk images in BitCurator
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
warcprox - WARC writing MITM HTTP/S proxy
webscrapbook - A browser extension that captures web pages to local device or backend server for future retrieval, organization, annotation, and edit. This project inherits from legacy Firefox add-on ScrapBook X.
proxy.py - 💫 Ngrok Alternative • ⚡ Fast • 🪶 Lightweight • 0️⃣ Dependency • 🔌 Pluggable • 😈 TLS interception • 🔒 DNS-over-HTTPS • 🔥 Poor Man's VPN • ⏪ Reverse & ⏩ Forward • 👮🏿 "Proxy Server" framework • 🌐 "Web Server" framework • ➵ ➶ ➷ ➠ "PubSub" framework • 👷 "Work" acceptor & executor framework
reddit-save - A Python tool for backing up your saved and upvoted posts on reddit to your computer.
tor-proxy - Run your any python service over tor using tor-proxy
http-proxy-list - It is a lightweight project that, every 10 minutes, scrapes lots of free-proxy sites, validates if it works, and serves a clean proxy list.
rexport - Reddit takeout: export your account data as JSON: comments, submissions, upvotes etc. 🦖
bitwarden-to-keepass - Export (most of) your Bitwarden items into KeePass (kdbx) database. That includes logins - with TOTP seeds, URIs, custom fields, attachments and secure notes
mwparserfromhell - A Python parser for MediaWiki wikicode