browsertrix-crawler
wiktextract
| | browsertrix-crawler | wiktextract |
|---|---|---|
| Mentions | 13 | 7 |
| Stars | 540 | 702 |
| Growth | 6.9% | - |
| Activity | 9.1 | 9.8 |
| Last commit | 4 days ago | 4 days ago |
| Language | TypeScript | Python |
| License | GNU Affero General Public License v3.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
browsertrix-crawler
-
Webrecorder: Capture interactive websites and replay them at a later time
(Disclaimer: I work at Webrecorder)
Our automated crawler browsertrix-crawler (https://github.com/webrecorder/browsertrix-crawler) uses Puppeteer to run the browsers we archive with: it loads pages, runs behaviors such as auto-scroll, and records the request/response traffic. We have custom behaviors for some social media and video sites to make sure that content is appropriately captured. It is a bit of a cat-and-mouse game, as we have to keep updating these behaviors as sites change, but for the most part it works pretty well.
The trickier part is replaying the archived websites, as a certain amount of rewriting has to happen to make the HTML and JS work with archived assets rather than the live web. One implementation of this is replayweb.page (https://github.com/webrecorder/replayweb.page), which does all of the rewriting client-side in the browser. This lets you interact with archived websites in WARC or WACZ format as if interacting with the original site.
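The rewriting described above can be illustrated with a toy sketch. This is not replayweb.page's actual implementation (which rewrites client-side via a service worker); the archive prefix and regex here are assumptions for illustration only. The idea: absolute URLs in archived HTML are redirected to resolve against an archive endpoint instead of the live web.

```python
import re

# Hypothetical archive prefix, invented for this sketch; real replay tools
# use their own URL schemes and rewrite far more than src/href attributes.
ARCHIVE_PREFIX = "https://archive.example/replay/20230101000000/"

def rewrite_html(html: str) -> str:
    """Point absolute src/href attributes at the archive instead of the live web."""
    def repl(match: re.Match) -> str:
        attr, url = match.group(1), match.group(2)
        return f'{attr}="{ARCHIVE_PREFIX}{url}"'
    # Only absolute http(s) URLs need redirecting; relative paths already
    # resolve against the replayed page's own (archived) origin.
    return re.sub(r'(src|href)="(https?://[^"]+)"', repl, html)

page = '<img src="https://example.com/logo.png"> <a href="/local">x</a>'
print(rewrite_html(page))
```

Doing this client-side, as replayweb.page does, avoids needing a server-side rewriting proxy: the archive file itself can be hosted as a static asset.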
-
Come back, c2.com, we still need you
I use browsertrix-crawler[0] for crawling and it does well on JS heavy sites since it uses a real browser to request pages. Even has options to load browser profiles so you can crawl while being authenticated on sites.
[0] https://github.com/webrecorder/browsertrix-crawler
-
Alternative to HTTrack (website copier) as of 2023?
I have started using the tools from https://webrecorder.net like Browsertrix Crawler and they have been working great. The web archive format is open source and very portable. The crawler even crawls and saves YouTube videos embedded on pages which is awesome.
-
Halomaps, which has been the main hub for Halo modding content for almost 20 years, is having its forums shut down on Feb 1st. A massive amount of content will be lost if it isn't archived.
This looks like a good candidate for https://github.com/webrecorder/browsertrix-crawler.
- Offline Internet Archive
- Options to backup https://trythatsoap.com/?
- How to Download All of Wikipedia onto a USB Flash Drive
-
Ask HN: Best approaches to archiving interactive web journalism/writing
I just learned about this organization, Saving Ukrainian Cultural Heritage Online (SUCHO): https://www.sucho.org/
They seem to be using various tools, like Browsertrix: https://github.com/webrecorder/browsertrix-crawler
It sounds promising for interactive sites:
> Support for custom browser behaviors, using Browsertrix Behaviors, including autoscroll, video autoplay and site-specific behaviors
Browsertrix links to https://replayweb.page/ for a way to view an archived site.
-
How is ArchiveBox?
If you need more advanced recursive spider/crawling ability beyond --depth=1, check out Browsertrix, Photon, or Scrapy and pipe the output URLs into ArchiveBox.
-
Looking for suggestions for archiving Google Groups
I recommend this: https://github.com/webrecorder/browsertrix-crawler
wiktextract
- Wiktionary dump file parser and multilingual data extractor
- How to Download All of Wikipedia onto a USB Flash Drive
-
I built a dictionary app even with more than 300 dictionary apps already available on the App Store
Great work
I'm working on a similar dictionary app and found Wiktionary insanely useful as a dictionary source.
Here is one more project aiming to make Wiktionary data usable as a JSON data structure: https://github.com/tatuylonen/wiktextract.
It has a link to a site https://kaikki.org/ which hosts dictionary data dumps.
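For a sense of what those dumps contain: kaikki.org publishes wiktextract output as JSON Lines, one entry per word/part-of-speech, with fields such as `word`, `pos`, and `senses` (each sense carrying `glosses`). A minimal sketch of pulling glosses out of one dump line (field names as I understand the format; verify against an actual dump):

```python
import json

# Abridged stand-in for one kaikki.org-style JSONL line (not real dump data).
line = json.dumps({
    "word": "archive",
    "pos": "noun",
    "senses": [{"glosses": ["A place for storing earlier documents."]}],
})

def glosses(jsonl_line: str) -> tuple[str, str, list[str]]:
    """Extract (word, pos, all glosses) from one dump entry."""
    entry = json.loads(jsonl_line)
    gs = [g for sense in entry.get("senses", []) for g in sense.get("glosses", [])]
    return entry["word"], entry["pos"], gs

print(glosses(line))
```

Because each line is an independent JSON object, a dictionary app can stream the dump and index only the fields it needs, without loading the whole file into memory.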
-
Dynamically generating minimal pair decks for Anki
Hm, that would be a good idea... if I didn't have to download so much data (over 20GB just for audio?!). Looking at the Python library that processes those dumps (https://github.com/tatuylonen/wiktextract), which is more manageable: using it would involve getting the WikiMedia dump file for every word on the list, then parsing them for the relevant data; what follows is mostly the same, except I end up with a bunch of cached files.
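If only pronunciation data is needed (as for minimal-pair decks), filtering a pre-built kaikki.org JSONL dump for IPA entries is far lighter than downloading audio. A sketch assuming the `sounds`/`ipa` field layout the dumps use (check your dump's schema before relying on it):

```python
import json
from typing import Iterable

def ipa_index(lines: Iterable[str]) -> dict[str, list[str]]:
    """Map each word to its IPA transcriptions, skipping entries without any."""
    index: dict[str, list[str]] = {}
    for line in lines:
        entry = json.loads(line)
        # Each "sounds" item may carry "ipa", "audio", or other keys;
        # keep only the IPA transcriptions.
        ipas = [s["ipa"] for s in entry.get("sounds", []) if "ipa" in s]
        if ipas:
            index.setdefault(entry["word"], []).extend(ipas)
    return index

# Abridged stand-in for two dump lines (not real dump data).
sample = [
    json.dumps({"word": "ship", "sounds": [{"ipa": "/ʃɪp/"}]}),
    json.dumps({"word": "sheep", "sounds": [{"ipa": "/ʃiːp/"}, {"audio": "..."}]}),
]
print(ipa_index(sample))
```

From such an index, candidate minimal pairs are just words whose transcriptions differ in a single segment, which can be computed locally without any audio files.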
-
What are some of the best digital free dictionaries available online (even for commercial use)?
Many parsers are available. https://github.com/tatuylonen/wiktextract
-
Best Approach to importing a languages dictionary
I'd probably try pulling from Wiktionary, there looks to be a Python package that can do it here.
-
This is not perfect but it's a start
And the JSON is built with https://github.com/tatuylonen/wiktextract, whose authors I have to thank
What are some alternatives?
grab-site - The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
WiktionaryParser - A Python Wiktionary Parser
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
Maat - Validation and transformation library powered by deductive ascending parser. Made to be extended for any kind of project.
Photon - Incredibly fast crawler designed for OSINT.
trankit - Trankit is a Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
remodeling - The original wiki rewritten as a single page application
laserembeddings - LASER multilingual sentence embeddings as a pip package
replayweb.page - Serverless replay of web archives directly in the browser
zim-tools - Various ZIM command line tools
Kotoba - Quickly search the built-in iOS dictionary to see definitions of words. Collect words you want to remember.