scraper

Node.js web scraper. Includes a command-line interface, a Docker container, a Terraform module and Ansible roles for distributed cloud scraping. Supported databases: SQLite, MySQL, PostgreSQL. Supported headless clients: Puppeteer, Playwright, Cheerio, jsdom. (by get-set-fetch)

Scraper Alternatives

Similar projects and alternatives to scraper

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. A higher number therefore means a better scraper alternative or higher similarity.


scraper reviews and mentions

Posts with mentions or reviews of scraper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-05-01.
  • How to collaborate on web scraping?
    2 projects | reddit.com/r/webscraping | 1 May 2022
    Store the scrape progress (to-be-scraped / in-progress / scraped / in-error URLs) in a database shared by all participants and scrape in parallel with as many machines as the db load permits. Got a connection timeout or a blocked IP on one machine? Update the scrape status for the corresponding URL and let another machine retry it. https://github.com/get-set-fetch/scraper (written in TypeScript) follows this idea. Using Terraform and a simple config file, you can adjust the number of scraper instances deployed in the cloud, both at startup and during the scraping process. In benchmarks, a PostgreSQL server running on a DigitalOcean VM with 4 vCPUs and 8 GB of memory allows ~2000 URLs to be scraped per second (synthetic data, no external traffic). From my own experience this is almost never the bottleneck; obeying robots.txt crawl-delay will surely put you under this limit. Disclaimer: I'm the npm package author.
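The claim-and-retry flow described above can be sketched as a small state machine. The real project persists this state in a shared SQL database (SQLite, MySQL, PostgreSQL) so multiple machines coordinate; the in-memory sketch below only illustrates the status transitions, and all names (`ScrapeQueue`, `claimNext`, `markError`) are hypothetical, not the package's actual API.

```typescript
// Illustrative sketch of the shared scrape-queue state machine.
// The real project keeps this state in a shared SQL database so that
// many scraper instances can claim URLs without stepping on each other.

type ScrapeStatus = 'to-be-scraped' | 'in-progress' | 'scraped' | 'in-error';

interface QueueEntry {
  url: string;
  status: ScrapeStatus;
  retries: number;
}

class ScrapeQueue {
  private entries = new Map<string, QueueEntry>();

  add(url: string): void {
    if (!this.entries.has(url)) {
      this.entries.set(url, { url, status: 'to-be-scraped', retries: 0 });
    }
  }

  // Claim the next available URL. With a SQL backend this would be a
  // single atomic UPDATE ... RETURNING so two machines never claim
  // the same URL.
  claimNext(): QueueEntry | undefined {
    for (const entry of this.entries.values()) {
      if (entry.status === 'to-be-scraped') {
        entry.status = 'in-progress';
        return entry;
      }
    }
    return undefined;
  }

  markScraped(url: string): void {
    const entry = this.entries.get(url);
    if (entry) entry.status = 'scraped';
  }

  // A connection timeout or blocked IP puts the URL back in the queue
  // so another machine can retry it, up to a retry budget.
  markError(url: string, maxRetries = 3): void {
    const entry = this.entries.get(url);
    if (!entry) return;
    entry.retries += 1;
    entry.status = entry.retries >= maxRetries ? 'in-error' : 'to-be-scraped';
  }
}
```

Because the status column lives in a database every participant can reach, "scrape in parallel with as many machines as the db load permits" falls out naturally: each machine just loops on claim → scrape → mark.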
  • How to serve scraped data?
    3 projects | reddit.com/r/webscraping | 25 Apr 2022
    Written in TypeScript, https://github.com/get-set-fetch/scraper stores scraped content directly in a database (SQLite, MySQL, PostgreSQL). Each URL represents a Resource. You can implement your own IResourceStorage and define the exact db columns you need.
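A custom storage along those lines might look like the sketch below. The `IResourceStorage` method signatures and the `Resource` shape here are assumptions for illustration; check the project source for the actual interface before implementing one.

```typescript
// Hypothetical sketch of a pluggable resource storage. The actual
// IResourceStorage interface in get-set-fetch/scraper may differ;
// these names and signatures are illustrative only.

interface Resource {
  url: string;
  content: string | null;
  scrapedAt: Date | null;
}

interface IResourceStorage {
  save(resource: Resource): Promise<void>;
  get(url: string): Promise<Resource | undefined>;
}

// In-memory stand-in for a SQL table. A real implementation would map
// these calls to INSERT / SELECT against SQLite, MySQL or PostgreSQL,
// with exactly the columns your project needs.
class MemoryResourceStorage implements IResourceStorage {
  private rows = new Map<string, Resource>();

  async save(resource: Resource): Promise<void> {
    this.rows.set(resource.url, { ...resource });
  }

  async get(url: string): Promise<Resource | undefined> {
    return this.rows.get(url);
  }
}
```

The point of the abstraction is that serving scraped data then becomes an ordinary query against your own schema, independent of how the scraper fetched it.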
  • How to scrape entire blogs with content?
    3 projects | reddit.com/r/webscraping | 6 Dec 2021
    You can use https://github.com/get-set-fetch/scraper with a custom plugin based on mozilla/readability, as detailed in https://getsetfetch.org/node/custom-plugins.html (extracting news article content). I think it's a close match to your use case.
  • A simple solution to rotate proxies or how to spin up your own rotation proxy server with Puppeteer and only a few lines of JS code
    1 project | reddit.com/r/webscraping | 5 Mar 2021
    I'm currently implementing concurrency conditions at project/proxy/domain/session level in https://github.com/get-set-fetch/scraper. At each level you can define the maximum number of requests and the delay between two consecutive requests.
  • Web scraping content into postgresql? Scheduling web scrapers into a pipeline with airflow?
    1 project | reddit.com/r/webscraping | 15 Feb 2021
    If you're familiar with Node.js, give https://github.com/get-set-fetch/scraper a try. Scraped content can be stored in SQLite, MySQL or PostgreSQL. It also supports Puppeteer, Playwright, Cheerio or jsdom for the actual content extraction. No scheduler though.
  • Web Scraping 101 with Python
    5 projects | news.ycombinator.com | 10 Feb 2021
    I'm using this exact strategy to scrape content directly from the DOM using APIs like document.querySelectorAll. You can use the same code in both headless browser clients like Puppeteer or Playwright and DOM clients like Cheerio or jsdom (assuming you have a wrapper over the document API). Depending on the way a web page was fetched (opened in a browser tab or fetched via Node.js http/https requests), ExtractHtmlContentPlugin and ExtractUrlsPlugin use different DOM wrappers (native, Cheerio, jsdom) to scrape the content.

    [1] https://github.com/get-set-fetch/scraper/blob/main/src/plugi...
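The wrapper idea above can be sketched as follows: extraction logic is written once against a minimal document interface, and each client (native browser DOM via Puppeteer/Playwright, Cheerio, jsdom) supplies an adapter behind it. The interface and the toy adapter below are illustrative assumptions, not the plugins' real wrapper API.

```typescript
// Extraction code sees only this minimal interface, so it runs
// unchanged whether the page was opened in a browser tab or fetched
// over http/https and parsed with Cheerio or jsdom.

interface MinimalElement {
  textContent: string;
  getAttribute(name: string): string | null;
}

interface MinimalDocument {
  querySelectorAll(selector: string): MinimalElement[];
}

// Client-agnostic extraction, analogous in spirit to what an
// ExtractUrlsPlugin-style plugin does.
function extractUrls(doc: MinimalDocument): string[] {
  return doc
    .querySelectorAll('a')
    .map(a => a.getAttribute('href'))
    .filter((href): href is string => href !== null);
}

// Toy adapter over pre-parsed data, standing in for the native /
// Cheerio / jsdom wrappers a real scraper would pick at runtime.
function stubDocument(links: { href: string; text: string }[]): MinimalDocument {
  return {
    querySelectorAll: (selector: string) =>
      selector === 'a'
        ? links.map(l => ({
            textContent: l.text,
            getAttribute: (name: string) => (name === 'href' ? l.href : null),
          }))
        : [],
  };
}
```

Swapping the adapter is then the only client-specific step; the extraction functions never change.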

  • What is your “I don't care if this succeeds” project?
    42 projects | news.ycombinator.com | 1 Feb 2021
    https://github.com/get-set-fetch/scraper - I've been working (intermittently :) ) on a Node.js / browser-extension scraper for the last 3 years; see the other projects under the get-set-fetch umbrella. I'm putting a lot more effort in lately, as I really want to do those Alexa top 1 million analyses: top JS libraries, certificate authorities and so on. A few weeks back I posted it on Show HN, as you can already do basic/intermediate scraping with it.

    It's not capable of handling 1 mil+ pages yet, as it's still limited to Puppeteer or Playwright. I'm working on adding Cheerio/jsdom support right now.

  • Show HN: Plugin Based, Batteries Included, Web Scraper
    1 project | news.ycombinator.com | 19 Jan 2021

Stats

Basic scraper repo stats
Mentions: 8
Stars: 44
Activity: 8.9
Last commit: 25 days ago

get-set-fetch/scraper is an open source project licensed under MIT License which is an OSI approved license.
