mwoffliner VS scraper

Compare mwoffliner and scraper to see how they differ.

mwoffliner

Mediawiki scraper: all your wiki articles in one highly compressed ZIM file (by openzim)

scraper

Nodejs web scraper. Contains a command line, docker container, terraform module and ansible roles for distributed cloud scraping. Supported databases: SQLite, MySQL, PostgreSQL. Supported headless clients: Puppeteer, Playwright, Cheerio, JSdom. (by get-set-fetch)
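The pluggable pieces listed above (a storage backend plus a headless client) can be pictured as a project configuration. The shape below is illustrative only — every name in it is an assumption, not the actual @get-set-fetch/scraper API:

```typescript
// Illustrative only: these types model the options described above,
// not the real @get-set-fetch/scraper API.
type StorageClient = 'sqlite' | 'mysql' | 'postgresql';
type DomClient = 'puppeteer' | 'playwright' | 'cheerio' | 'jsdom';

interface ScrapeProjectConfig {
  name: string;
  storage: { client: StorageClient; connection: string };
  dom: DomClient;
  startUrls: string[];
}

const config: ScrapeProjectConfig = {
  name: 'example-crawl',
  storage: { client: 'sqlite', connection: './scrape.db' }, // hypothetical field names
  dom: 'cheerio',                                           // static-HTML client, no browser
  startUrls: ['https://example.com'],
};

console.log(config.dom); // cheerio
```

The point is the separation of concerns: which database holds progress and which client fetches/parses pages are independent choices.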
                 mwoffliner                          scraper
Mentions         7                                   12
Stars            256                                 98
Growth           1.2%                                -
Activity         9.2                                 0.0
Latest commit    21 days ago                         about 1 year ago
Language         TypeScript                          TypeScript
License          GNU General Public License v3.0     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
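The description above (recent commits carry more weight than older ones) suggests an exponentially decayed score. The site's actual formula is not public, so the half-life and constants below are invented purely for illustration:

```typescript
// Toy activity score: each commit contributes a weight that halves every
// `halfLifeDays` days. Illustrative only; the comparison site's real
// formula is unknown.
function activityScore(commitAgesInDays: number[], halfLifeDays = 30): number {
  return commitAgesInDays
    .map((age) => Math.pow(0.5, age / halfLifeDays))
    .reduce((sum, w) => sum + w, 0);
}

// A project with several recent commits outranks one last touched a year ago,
// even though both have the same total commit count shape.
const active = activityScore([1, 3, 7, 10]);  // days since each recent commit
const dormant = activityScore([300, 400]);    // commits from ~a year ago
console.log(active > dormant); // true
```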

mwoffliner

Posts with mentions or reviews of mwoffliner. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-25.
  • Wiktionary doesn’t support tables
    1 project | /r/Kiwix | 15 Jul 2023
    You can also directly open a ticket at https://github.com/openzim/mwoffliner/issues with as much info as possible so we can look into it (zim name, language, date, article name, etc.)
  • Recent Wiktionary ZIM files don't show a search bar
    3 projects | /r/Kiwix | 25 Apr 2023
    Welp yes, that's a bug (likely a regression from a recent update). Can you please open a ticket at https://github.com/openzim/zim-requests/issues (we might move it later on but that's as good a starting place as can be).
  • Latest Wikipedia zim dump (97 GB) is available for download
    5 projects | /r/DataHoarder | 20 Feb 2023
    https://github.com/openzim/mwoffliner/issues/1655 Unfortunately there's no way to convert the output of the easiest way to make proper dumps of wikis (ArchiveTeam's wikiteam-tools) into Kiwix ZIMs. That would allow all sorts of niche information to be preserved in a readable way.
  • What's the "best" way to make your own ZIMs (in docker)?
    2 projects | /r/Kiwix | 21 Oct 2022
    I'm looking at making my own ZIM, though I'm not sure of the best way to go about it. I've seen zimit on GitHub, and mwoffliner on GitHub too.
  • Self made ZIM-File only contains [object object]
    1 project | /r/Kiwix | 1 Jan 2022
    Generally speaking, I'd advise opening a ticket on https://github.com/openzim/mwoffliner/issues
  • Creating ZIM files for Kiwix by myself?
    3 projects | /r/DataHoarder | 28 Oct 2021
    r/kiwix would be the place to ask, but at the end of the day it all comes down to heading out to openzim.org (or the corresponding github repo) and figuring it out. You can either grab zimit and run it locally, or access all the libraries that will help you build your own scraper (Nautilus will assemble documents and videos into a single file library, MWoffliner will do for wikis, youtube will do YouTube, etc.).

scraper

Posts with mentions or reviews of scraper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-08.
  • Most Used Individual JavaScript Libraries - jQuery still leads
    1 project | /r/javascript | 14 Jun 2022
  • Most Used JavaScript Libraries (percentage) - June 2022 [OC]
    2 projects | /r/dataisbeautiful | 8 Jun 2022
    Additional info and source code for generating the dataset, summarizing it and rendering the chart are available at https://github.com/get-set-fetch/scraper/tree/main/datasets/javascript-libs-from-top-1mm-sites
  • How to collaborate on web scraping?
    2 projects | /r/webscraping | 1 May 2022
    Store the scrape progress (to-be-scraped / in-progress / scraped / in-error URLs) in a database shared by all participants and scrape in parallel with as many machines as the db load permits. Got a connection timeout or a blocked IP on one machine? Update the scrape status for the corresponding URL and let another machine retry it. https://github.com/get-set-fetch/scraper (written in TypeScript) follows this idea. Using Terraform from a simple config file, you can adjust the number of scraper instances deployed in the cloud at startup and during the scraping process. In benchmarks, a PostgreSQL server running on a DigitalOcean VM with 4 vCPU and 8 GB memory allows ~2000 URLs to be scraped per second (synthetic data, no external traffic). From my own experience this is almost never the bottleneck; obeying robots.txt crawl-delay will surely put you under this limit. Disclaimer: I'm the npm package author.
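The shared-progress idea in the comment above can be sketched as a small queue with the four statuses it names. An in-memory Map stands in here for the shared database (SQLite/MySQL/PostgreSQL in the real setup), and the atomic-claim step is simulated; the `ScrapeQueue` class is this sketch's own invention, not the library's API:

```typescript
// Each worker claims a URL, and failed URLs are released so another
// machine can retry them later.
type Status = 'to-be-scraped' | 'in-progress' | 'scraped' | 'in-error';

class ScrapeQueue {
  private urls = new Map<string, Status>();

  add(url: string): void {
    if (!this.urls.has(url)) this.urls.set(url, 'to-be-scraped');
  }

  // Claim the next available URL. Against a real shared DB this would be
  // an atomic update (e.g. SELECT ... FOR UPDATE SKIP LOCKED in Postgres).
  claim(): string | undefined {
    for (const [url, status] of this.urls) {
      if (status === 'to-be-scraped' || status === 'in-error') {
        this.urls.set(url, 'in-progress');
        return url;
      }
    }
    return undefined;
  }

  markScraped(url: string): void { this.urls.set(url, 'scraped'); }
  markError(url: string): void { this.urls.set(url, 'in-error'); } // retryable
}

const queue = new ScrapeQueue();
queue.add('https://example.com/a');
queue.add('https://example.com/b');

const first = queue.claim();   // a worker takes /a
queue.markError(first!);       // connection timeout: release it for retry
const retried = queue.claim(); // another worker picks /a up again
console.log(first === retried); // true
```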
  • How to serve scraped data?
    3 projects | /r/webscraping | 25 Apr 2022
    Written in typescript https://github.com/get-set-fetch/scraper stores scraped content directly in a database (sqlite, mysql, postgresql). Each URL represents a Resource. You can implement your own IResourceStorage and define the exact db columns you need.
  • How to scrape entire blogs with content?
    3 projects | /r/webscraping | 6 Dec 2021
    You can use https://github.com/get-set-fetch/scraper with a custom plugin based on the mozilla/readability as detailed in https://getsetfetch.org/node/custom-plugins.html (extracting news article content). I think it's a close match to your use case.
  • A simple solution to rotate proxies or how to spin up your own rotation proxy server with Puppeteer and only a few lines of JS code
    1 project | /r/webscraping | 5 Mar 2021
    I'm currently implementing concurrency conditions at project/proxy/domain/session level in https://github.com/get-set-fetch/scraper . On each level you can define the maximum number of requests and the delay between two consecutive requests.
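One of the concurrency conditions mentioned above — a minimum delay between two consecutive requests at the domain level — can be sketched as a small throttle. This is a generic pattern under that description, not get-set-fetch/scraper's actual implementation; the class and method names are this sketch's own:

```typescript
// Tracks the last request time per domain and reports how long a caller
// must wait before the next request to that domain is allowed.
class DomainThrottle {
  private lastRequest = new Map<string, number>();

  constructor(private delayMs: number) {}

  // Milliseconds to wait before hitting `domain` again (0 = go now).
  waitTimeFor(domain: string, nowMs: number): number {
    const last = this.lastRequest.get(domain);
    if (last === undefined) return 0;
    return Math.max(0, last + this.delayMs - nowMs);
  }

  recordRequest(domain: string, nowMs: number): void {
    this.lastRequest.set(domain, nowMs);
  }
}

const throttle = new DomainThrottle(1000); // 1s between requests per domain
throttle.recordRequest('example.com', 0);
console.log(throttle.waitTimeFor('example.com', 400)); // 600
console.log(throttle.waitTimeFor('other.org', 400));   // 0 (never seen)
```

The same shape extends to the other levels the comment lists (project, proxy, session) by keying the map on those identifiers instead.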
  • Web scraping content into postgresql? Scheduling web scrapers into a pipeline with airflow?
    1 project | /r/webscraping | 15 Feb 2021
    If you're familiar with nodejs give https://github.com/get-set-fetch/scraper a try. Scraped content can be stored in sqlite, mysql or postgresql. It also supports puppeteer, playwright, cheerio or jsdom for the actual content extraction. No scheduler though.
  • Web Scraping 101 with Python
    5 projects | news.ycombinator.com | 10 Feb 2021
    I'm using this exact strategy to scrape content directly from DOM using APIs like document.querySelectorAll. You can use the same code in both headless browser clients like Puppeteer or Playwright and DOM clients like cheerio or jsdom (assuming you have a wrapper over document API). Depending on the way a web page was fetched (opened in a browser tab or fetched via nodejs http/https requests), ExtractHtmlContentPlugin, ExtractUrlsPlugin use different DOM wrappers (native, cheerio, jsdom) to scrape the content.

    [1] https://github.com/get-set-fetch/scraper/blob/main/src/plugi...
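The wrapper idea in the comment above — extraction logic that only depends on a document-like interface, so the same code runs against a browser DOM (Puppeteer/Playwright) or a parsed one (cheerio/jsdom) — can be sketched as follows. A hand-rolled stub stands in for those clients to keep the example self-contained:

```typescript
// Minimal document-like interface: the only surface the extractor needs.
interface ElementLike { textContent: string }
interface DocumentLike { querySelectorAll(selector: string): ElementLike[] }

// The extraction code is identical regardless of which client produced
// the document object.
function extractHeadings(doc: DocumentLike): string[] {
  return doc.querySelectorAll('h2').map((el) => el.textContent);
}

// Stub document for the example; jsdom, cheerio, or a page.evaluate()
// bridge would supply the real implementation.
const stubDoc: DocumentLike = {
  querySelectorAll: (selector) =>
    selector === 'h2'
      ? [{ textContent: 'Intro' }, { textContent: 'Results' }]
      : [],
};

console.log(extractHeadings(stubDoc)); // [ 'Intro', 'Results' ]
```

Note the interface deliberately returns an array rather than a live NodeList, which is one of the small normalizations such a wrapper has to perform.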

  • What is your “I don't care if this succeeds” project?
    42 projects | news.ycombinator.com | 1 Feb 2021
    https://github.com/get-set-fetch/scraper - I've been working (intermittently :) ) on a nodejs or browser extension scraper for the last 3 years; see the other projects under the get-set-fetch umbrella. Putting a lot more effort in lately, as I really want to do those Alexa top 1 million analyses like top JS libraries, certificate authorities and so on. A few weeks back I posted it on Show HN, as you can do basic/intermediate scraping with it.

    Not capable of handling 1 mil+ pages yet, as it is still limited to Puppeteer or Playwright. Working on adding cheerio/jsdom support right now.

What are some alternatives?

When comparing mwoffliner and scraper you can also consider the following projects:

wikipedia-mirror - 🌐 Guide and tools to run a full offline mirror of Wikipedia.org with three different approaches: Nginx caching proxy, Kiwix + ZIM dump, and MediaWiki/XOWA + XML dump

puppeteer-cluster - Puppeteer Pool, run a cluster of instances in parallel

wikiscript - wikiscript gem - scripts for wikipedia (get wikitext for page, parse tables & links, etc.)

playwright-recaptcha-solver - ReCaptcha V2 solver for Playwright

nautilus - Turns a collection of documents into a browsable ZIM file

playwright-python - Python version of the Playwright testing and automation library.

zimit - Make a ZIM file from any Web site and surf offline!

pyppeteer - Headless chrome/chromium automation library (unofficial port of puppeteer)

kiwix-tools - Command line Kiwix tools: kiwix-serve, kiwix-manage, ...

Twitch-Drops-Bot - A Node.js bot that will automatically watch Twitch streams and claim drop rewards.

libkiwix - Common code base for all Kiwix ports

vopono - Run applications through VPN tunnels with temporary network namespaces