scraper vs Scrapy
| | scraper | Scrapy |
|---|---|---|
| Mentions | 12 | 180 |
| Stars | 98 | 50,896 |
| Growth | - | 1.2% |
| Activity | 0.0 | 9.6 |
| Latest commit | about 1 year ago | 5 days ago |
| Language | TypeScript | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scraper
- Most Used Individual JavaScript Libraries - jQuery still leads
- Most Used JavaScript Libraries (percentage) - June 2022 [OC]
Additional info and the source code for generating the dataset, summarizing it, and rendering the chart are available at https://github.com/get-set-fetch/scraper/tree/main/datasets/javascript-libs-from-top-1mm-sites
- How to collaborate on web scraping?
Store the scrape progress (to-be-scraped / in-progress / scraped / in-error URLs) in a database shared by all participants and scrape in parallel with as many machines as the db load permits. Got a connection timeout or a blocked IP on one machine? Update the scrape status for the corresponding URL and let another machine retry it. https://github.com/get-set-fetch/scraper (written in TypeScript) follows this idea, and with Terraform a simple config file lets you adjust the number of scraper instances deployed in the cloud, both at startup and during the scraping process.
In benchmarks, a PostgreSQL server running on a DigitalOcean VM with 4 vCPUs and 8 GB of memory handles ~2,000 scraped URLs per second (synthetic data, no external traffic). From my own experience this is almost never the bottleneck; obeying robots.txt crawl-delay will surely keep you under this limit. Disclaimer: I'm the npm package author.
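The claim-and-retry mechanics map naturally onto plain SQL. Below is a minimal TypeScript sketch of the shared-queue idea, assuming a hypothetical `queue` table with `url` and `status` columns (not the package's actual schema); Postgres's `FOR UPDATE SKIP LOCKED` lets parallel workers claim different rows without blocking each other:

```typescript
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Atomically claim one pending URL; SKIP LOCKED makes concurrent workers
// pick different rows instead of waiting on the same one.
async function claimNextUrl(): Promise<string | null> {
  const { rows } = await pool.query(`
    UPDATE queue SET status = 'in-progress'
    WHERE url = (
      SELECT url FROM queue
      WHERE status = 'to-be-scraped'
      LIMIT 1
      FOR UPDATE SKIP LOCKED
    )
    RETURNING url
  `);
  return rows[0]?.url ?? null;
}

// On success mark the row scraped; on a timeout or blocked IP flag it
// 'in-error' so another machine (with another IP) can retry it.
async function release(url: string, ok: boolean): Promise<void> {
  await pool.query(`UPDATE queue SET status = $2 WHERE url = $1`, [
    url,
    ok ? 'scraped' : 'in-error',
  ]);
}
```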
- How to serve scraped data?
Written in TypeScript, https://github.com/get-set-fetch/scraper stores scraped content directly in a database (SQLite, MySQL, PostgreSQL). Each URL represents a Resource, and you can implement your own IResourceStorage to define the exact db columns you need.
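The exact IResourceStorage contract lives in the repo; the sketch below is an illustrative guess at the shape such an adapter takes, not the package's real API. It keeps only the columns you care about:

```typescript
// Rough sketch only: the real IResourceStorage contract is defined in the
// get-set-fetch/scraper repo; these method names and the Resource shape
// are illustrative assumptions.
interface Resource {
  url: string;
  content: string | null;
  contentType?: string;
}

interface IResourceStorage {
  init(): Promise<void>;
  save(resource: Resource): Promise<void>;
  getByUrl(url: string): Promise<Resource | null>;
}

// A custom adapter that persists just three columns in Postgres.
class LeanPgStorage implements IResourceStorage {
  constructor(private pool: import('pg').Pool) {}

  async init(): Promise<void> {
    await this.pool.query(`
      CREATE TABLE IF NOT EXISTS resources (
        url TEXT PRIMARY KEY,
        content TEXT,
        content_type TEXT
      )
    `);
  }

  async save(r: Resource): Promise<void> {
    await this.pool.query(
      `INSERT INTO resources (url, content, content_type)
       VALUES ($1, $2, $3)
       ON CONFLICT (url) DO UPDATE SET content = $2, content_type = $3`,
      [r.url, r.content, r.contentType ?? null],
    );
  }

  async getByUrl(url: string): Promise<Resource | null> {
    const { rows } = await this.pool.query(
      `SELECT url, content, content_type FROM resources WHERE url = $1`,
      [url],
    );
    return rows[0]
      ? { url: rows[0].url, content: rows[0].content, contentType: rows[0].content_type }
      : null;
  }
}
```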
- How to scrape entire blogs with content?
You can use https://github.com/get-set-fetch/scraper with a custom plugin based on mozilla/readability, as detailed in https://getsetfetch.org/node/custom-plugins.html (extracting news article content). I think it's a close match to your use case.
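For context, the extraction step such a plugin performs can be sketched standalone with @mozilla/readability plus jsdom (the plugin wiring itself is what the linked docs cover):

```typescript
import { JSDOM } from 'jsdom';
import { Readability } from '@mozilla/readability';

// Strip navigation/ads and keep just the article body, the same way
// readability powers Firefox's reader view.
function extractArticle(html: string, url: string) {
  const dom = new JSDOM(html, { url });
  const article = new Readability(dom.window.document).parse();
  return article ? { title: article.title, text: article.textContent } : null;
}
```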
- A simple solution to rotate proxies, or how to spin up your own rotating proxy server with Puppeteer and only a few lines of JS code
I'm currently implementing concurrency conditions at the project/proxy/domain/session level in https://github.com/get-set-fetch/scraper. At each level you can define the maximum number of parallel requests and the delay between two consecutive requests.
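A minimal sketch of the domain-level case only (not the package's implementation): cap in-flight requests per hostname and enforce a delay between consecutive starts.

```typescript
// Illustrative throttle: at most `maxConcurrent` requests per hostname,
// with at least `delayMs` between two consecutive request starts.
class DomainThrottle {
  private active = new Map<string, number>();
  private lastStart = new Map<string, number>();

  constructor(private maxConcurrent = 2, private delayMs = 1000) {}

  private async acquire(host: string): Promise<void> {
    for (;;) {
      const running = this.active.get(host) ?? 0;
      const elapsed = Date.now() - (this.lastStart.get(host) ?? 0);
      if (running < this.maxConcurrent && elapsed >= this.delayMs) break;
      await new Promise((r) => setTimeout(r, 50)); // poll until a slot frees up
    }
    this.active.set(host, (this.active.get(host) ?? 0) + 1);
    this.lastStart.set(host, Date.now());
  }

  async fetchThrottled(url: string): Promise<Response> {
    const host = new URL(url).hostname;
    await this.acquire(host);
    try {
      return await fetch(url);
    } finally {
      this.active.set(host, (this.active.get(host) ?? 0) - 1);
    }
  }
}
```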
- Web scraping content into PostgreSQL? Scheduling web scrapers into a pipeline with Airflow?
If you're familiar with Node.js, give https://github.com/get-set-fetch/scraper a try. Scraped content can be stored in SQLite, MySQL, or PostgreSQL, and it supports Puppeteer, Playwright, cheerio, or jsdom for the actual content extraction. No scheduler, though.
- Web Scraping 101 with Python
I'm using this exact strategy to scrape content directly from the DOM using APIs like document.querySelectorAll. The same code can run both in headless browser clients like Puppeteer or Playwright and in DOM clients like cheerio or jsdom (assuming you have a wrapper over the document API). Depending on how a web page was fetched (opened in a browser tab or retrieved via Node.js http/https requests), ExtractHtmlContentPlugin and ExtractUrlsPlugin use different DOM wrappers (native, cheerio, jsdom) to scrape the content [1].
[1] https://github.com/get-set-fetch/scraper/blob/main/src/plugi...
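A rough sketch of the wrapper idea, not the plugins' actual code: write one extractor against the standard DOM API, serialize it into the page on the browser path, and hand it a jsdom document on the plain-HTTP path.

```typescript
import puppeteer from 'puppeteer';
import { JSDOM } from 'jsdom';

// Works against any standards-compliant `document`.
function extractLinks(doc: Document): string[] {
  return Array.from(doc.querySelectorAll('a[href]'))
    .map((a) => (a as HTMLAnchorElement).href);
}

// Browser path: serialize the same function and evaluate it in page context.
async function viaBrowser(url: string): Promise<string[]> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url);
    return (await page.evaluate(`(${extractLinks.toString()})(document)`)) as string[];
  } finally {
    await browser.close();
  }
}

// DOM-client path: fetch the HTML over HTTP and reuse the same function.
async function viaJsdom(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  return extractLinks(new JSDOM(html, { url }).window.document);
}
```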
- What is your “I don't care if this succeeds” project?
https://github.com/get-set-fetch/scraper - I've been working (intermittently :) ) on a Node.js / browser-extension scraper for the last 3 years; see the other projects under the get-set-fetch umbrella. I'm putting a lot more effort in lately, as I really want to do those Alexa top-1-million analyses: top JS libraries, certificate authorities, and so on. A few weeks back I posted it on Show HN, as you can already do basic-to-intermediate scraping with it.
It's not capable of handling 1M+ pages yet, as it's still limited to Puppeteer or Playwright. I'm working on adding cheerio/jsdom support right now.
Scrapy
- Scrapy: A Fast and Powerful Scraping and Web Crawling Framework
- Seven Python Projects to Elevate Your Coding Skills
BeautifulSoup4, Scrapy
- What is SERP? Meaning, Use Cases and Approaches
While there is no library specific to SERPs, some web scraping libraries can extract Google Search page rankings. One famous example is Scrapy: a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It has rich developer community support and is used by 50+ projects.
- Creating an advanced search engine with PostgreSQL
If you're looking for a turn-key solution, I'd have to dig a little. I generally write a scraper in Python that dumps into a database or flat file (depending on the number of records I'm hunting).
Scraping is a separate subject, but once you write one scraper you can generally reuse the relevant portions for many others. If you get adept at a scraping framework like Scrapy [1] you can work fairly quickly, but there aren't many tools that work out of the box for every site you'll encounter.
Once you've written the spider, it can generally be rerun for updates unless the site code is dramatically altered. It really comes down to whether the spider is coded against something brittle (e.g. hunting for specific heading sizes or fonts) instead of grabbing the underlying JSON/XHR, which doesn't usually change frequently; see the sketch after this quote.
1. https://scrapy.org
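That last point deserves an illustration. A hedged sketch (TypeScript here, to match the examples above, though the quoted author works in Python), with a hypothetical endpoint and response shape: find the request the site's own frontend makes in your browser's network tab, then call it directly instead of parsing HTML.

```typescript
// Hypothetical endpoint and fields, for illustration only: substitute the
// XHR/fetch call you observe in the browser's network tab.
interface Product {
  id: number;
  name: string;
  price: number;
}

async function fetchProducts(page: number): Promise<Product[]> {
  // The same JSON API the site's frontend calls; far more stable than
  // scraping heading sizes or CSS classes.
  const res = await fetch(`https://example.com/api/products?page=${page}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const body = (await res.json()) as { items: Product[] };
  return body.items;
}
```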
- Turning webpages into pdf
- Implementing case-sensitive headers in Scrapy (not through `_caseMappings`)
Scrapy capitalizes headers for requests.
- Tips for projects using web scraping
- Best tools to use for web scraping?
Scrapy is a web scraping toolkit
- What do .NET devs use for web scraping these days?
I know this might not be a good answer, as it's not .NET, but we use https://scrapy.org/ (Python).
- I'm using Python to scrape web page content and extract keywords, how can I make it faster to process?
What are some alternatives?
puppeteer-cluster - Puppeteer Pool, run a cluster of instances in parallel
requests-html - Pythonic HTML Parsing for Humans™
playwright-recaptcha-solver - ReCaptcha V2 solver for Playwright
pyspider - A Powerful Spider (Web Crawler) System in Python.
playwright-python - Python version of the Playwright testing and automation library.
colly - Elegant Scraper and Crawler Framework for Golang
pyppeteer - Headless chrome/chromium automation library (unofficial port of puppeteer)
MechanicalSoup - A Python library for automating interaction with websites.
Twitch-Drops-Bot - A Node.js bot that will automatically watch Twitch streams and claim drop rewards.
vopono - Run applications through VPN tunnels with temporary network namespaces
undetected-chromedriver - Custom Selenium Chromedriver | Zero-Config | Passes ALL bot mitigation systems (like Distil / Imperva / DataDome / CloudFlare IUAM)