scraper

Node.js web scraper. Includes a command-line interface, Docker container, Terraform module and Ansible roles for distributed cloud scraping. Supported databases: SQLite, MySQL, PostgreSQL. Supported headless clients: Puppeteer, Playwright, Cheerio, jsdom. (by get-set-fetch)

Scraper Alternatives

Similar projects and alternatives to scraper

NOTE: The number of mentions on this list indicates mentions in common posts plus user-suggested alternatives. Hence, a higher number means a better scraper alternative or a higher degree of similarity.

scraper reviews and mentions

Posts with mentions or reviews of scraper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-08.
  • Most Used JavaScript Libraries (percentage) - June 2022 [OC]
    2 projects | /r/dataisbeautiful | 8 Jun 2022
    Additional info and source code for generating the dataset, summarizing it and rendering the chart are available at https://github.com/get-set-fetch/scraper/tree/main/datasets/javascript-libs-from-top-1mm-sites
  • How to collaborate on web scraping?
    2 projects | /r/webscraping | 1 May 2022
    Store the scrape progress (to-be-scraped / in-progress / scraped / in-error URLs) in a database shared by all participants and scrape in parallel with as many machines as the db load permits. Got a connection timeout or a blocked IP on one machine? Update the scrape status for the corresponding URL and let another machine retry it. https://github.com/get-set-fetch/scraper (written in TypeScript) follows this idea. Using Terraform, a simple config file lets you adjust the number of scraper instances deployed in the cloud at startup and during the scraping process. In benchmarks, a PostgreSQL server running on a DigitalOcean VM with 4 vCPUs and 8 GB of memory allows ~2000 URLs to be scraped per second (synthetic data, no external traffic). From my own experience this is almost never the bottleneck; obeying a robots.txt crawl-delay will surely keep you under this limit. Disclaimer: I'm the npm package author.
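
    A minimal sketch of that shared-queue idea, assuming a plain PostgreSQL table and the pg driver rather than the library's actual schema (the `queue` table and its column names are hypothetical, chosen only to illustrate the status transitions described above):

```typescript
import { Pool } from 'pg';

// Hypothetical table, not the library's schema:
// CREATE TABLE queue (url TEXT PRIMARY KEY, status TEXT NOT NULL DEFAULT 'to-be-scraped');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Atomically claim one pending URL so no two machines scrape it at the same time.
async function claimNextUrl(): Promise<string | null> {
  const { rows } = await pool.query(
    `UPDATE queue SET status = 'in-progress'
     WHERE url = (
       SELECT url FROM queue WHERE status = 'to-be-scraped'
       LIMIT 1 FOR UPDATE SKIP LOCKED
     )
     RETURNING url`
  );
  return rows[0]?.url ?? null;
}

// On success or failure, record the outcome; resetting a failed URL to
// 'to-be-scraped' lets another machine pick it up and retry.
async function markUrl(
  url: string,
  status: 'scraped' | 'in-error' | 'to-be-scraped'
): Promise<void> {
  await pool.query('UPDATE queue SET status = $1 WHERE url = $2', [status, url]);
}
```

    FOR UPDATE SKIP LOCKED lets concurrent workers claim different rows without blocking each other, which is what makes "as many machines as the db load permits" workable.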
  • How to serve scrapped data?
    3 projects | /r/webscraping | 25 Apr 2022
    Written in TypeScript, https://github.com/get-set-fetch/scraper stores scraped content directly in a database (SQLite, MySQL, PostgreSQL). Each URL represents a Resource. You can implement your own IResourceStorage and define the exact db columns you need.
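
    As a rough illustration of "the exact db columns you need" (the interface members, table and column names below are hypothetical examples, not the library's defaults or the real IResourceStorage shape):

```typescript
// Hypothetical shape of one scraped resource, keyed by URL.
interface BlogResource {
  url: string;            // each URL is one Resource
  status: number | null;  // HTTP status of the last fetch
  title: string | null;
  content: string | null; // extracted article text
  scrapedAt: Date | null;
}

// Matching PostgreSQL DDL with exactly those columns and nothing more.
const createResourcesTable = `
  CREATE TABLE IF NOT EXISTS resources (
    url        TEXT PRIMARY KEY,
    status     INTEGER,
    title      TEXT,
    content    TEXT,
    scraped_at TIMESTAMPTZ
  )`;
```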
  • How to scrape entire blogs with content?
    3 projects | /r/webscraping | 6 Dec 2021
    You can use https://github.com/get-set-fetch/scraper with a custom plugin based on mozilla/readability, as detailed in https://getsetfetch.org/node/custom-plugins.html (extracting news article content). I think it's a close match for your use case.
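
    A self-contained sketch of the extraction step such a plugin would delegate to, using jsdom and @mozilla/readability directly (the plugin wiring itself is covered in the guide linked above):

```typescript
import { JSDOM } from 'jsdom';
import { Readability } from '@mozilla/readability';

// Given raw HTML for a blog post, return the readable article content.
function extractArticle(html: string, url: string) {
  // Readability needs a DOM document; jsdom builds one from the raw HTML.
  const dom = new JSDOM(html, { url });
  const article = new Readability(dom.window.document).parse();
  if (!article) return null;
  return {
    url,
    title: article.title,
    content: article.textContent, // plain text of the main article body
  };
}
```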
  • Web Scraping 101 with Python
    5 projects | news.ycombinator.com | 10 Feb 2021
    I'm using this exact strategy to scrape content directly from the DOM using APIs like document.querySelectorAll. You can use the same code in both headless browser clients like Puppeteer or Playwright and DOM clients like cheerio or jsdom (assuming you have a wrapper over the document API). Depending on the way a web page was fetched (opened in a browser tab or fetched via Node.js http/https requests), ExtractHtmlContentPlugin and ExtractUrlsPlugin use different DOM wrappers (native, cheerio, jsdom) to scrape the content.

    [1] https://github.com/get-set-fetch/scraper/blob/main/src/plugi...
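
    A minimal sketch of that wrapper idea, independent of the plugins linked above (the DocLike interface and helper names are made up for illustration):

```typescript
import { JSDOM } from 'jsdom';
import * as cheerio from 'cheerio';

// The minimal slice of the document API the extractor needs.
interface DocLike {
  querySelectorAll(selector: string): Iterable<{ getAttribute(name: string): string | null }>;
}

// Same extraction logic regardless of how the page was fetched.
function extractLinks(doc: DocLike): string[] {
  return Array.from(doc.querySelectorAll('a[href]'))
    .map(a => a.getAttribute('href'))
    .filter((href): href is string => !!href);
}

// DOM client: jsdom implements querySelectorAll natively.
const fromJsdom = extractLinks(new JSDOM('<a href="/about">About</a>').window.document);

// cheerio has no document API, so a thin wrapper adapts it to DocLike.
function cheerioDoc(html: string): DocLike {
  const $ = cheerio.load(html);
  return {
    querySelectorAll: (selector: string) =>
      $(selector).toArray().map(el => ({
        getAttribute: (name: string) => $(el).attr(name) ?? null,
      })),
  };
}
const fromCheerio = extractLinks(cheerioDoc('<a href="/about">About</a>'));

// In Puppeteer/Playwright the same function body runs inside the page,
// where `document` already satisfies DocLike:
//   const links = await page.evaluate(() => Array.from(
//     document.querySelectorAll('a[href]')).map(a => a.getAttribute('href')));
```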

  • What is your “I don't care if this succeeds” project?
    42 projects | news.ycombinator.com | 1 Feb 2021
    https://github.com/get-set-fetch/scraper - I've been working (intermittently :) ) on a nodejs / browser extension scraper for the last 3 years; see the other projects under the get-set-fetch umbrella. Putting a lot more effort in lately as I really want to do those Alexa top 1 million analyses, like top js libraries, certificate authorities and so on. A few weeks back I posted it on Show HN, as you can do basic/intermediate scraping with it.

    It's not capable of handling 1 mil+ pages yet, as it's still limited to Puppeteer or Playwright. Working on adding cheerio/jsdom support right now.


Stats

Basic scraper repo stats
Mentions: 12
Stars: 98
Activity: 0.0
Last commit: about 1 year ago