scrapydweb VS scrapy-rotating-proxies

Compare scrapydweb and scrapy-rotating-proxies to see how they differ.

                 scrapydweb                            scrapy-rotating-proxies
Mentions         6                                     4
Stars            3,001                                 705
Growth           -                                     0.0%
Activity         3.6                                   0.0
Latest commit    about 1 month ago                     almost 2 years ago
Language         Python                                Python
License          GNU General Public License v3.0 only  MIT License
The metrics above are defined as follows:
  • Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
  • Stars - the number of stars a project has on GitHub.
  • Growth - month-over-month growth in stars.
  • Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

scrapydweb

Posts with mentions or reviews of scrapydweb. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-14.

scrapy-rotating-proxies

Posts with mentions or reviews of scrapy-rotating-proxies. We have used some of these posts to build our list of alternatives and similar projects.
  • How do you handle CAPTCHA pages appearing in some of the rotating proxies you use?
    1 project | /r/webscraping | 13 Apr 2023
    It was the sliding CAPTCHA, but I solved it by following the instructions for the library I'm using to rotate proxies (https://github.com/TeamHG-Memex/scrapy-rotating-proxies) so that it retries with a different IP whenever a CAPTCHA appears. The relevant instructions are at the bottom of the README, if anyone is interested. (A sketch of such a ban policy appears after this list.)
  • Scrapy rotating proxies
    1 project | /r/webscraping | 1 Aug 2022
    Hi, I've been using the scrapy-rotating-proxies library (https://github.com/TeamHG-Memex/scrapy-rotating-proxies) with Scrapy, and my crawl logs contain lines like "[rotating_proxies.expire] DEBUG: Proxy is DEAD". When I check and test the proxies (I'm using Webshare proxies) and the URLs mentioned in the logs individually, they work fine, so I assume it's a problem with the library. Has anyone had the same or a similar issue? (I looked for tickets reported on GitHub but didn't find any referring to this.)
  • how does one configure webshare api key in scrapy scripts and also to use scrapy-proxy-pool?
    1 project | /r/scrapy | 21 Dec 2021
    Scrapy takes the proxy from the http_proxy/https_proxy environment variables, and they can include the user/password. As for pools, Scrapy itself doesn't support them, but you can use https://github.com/TeamHG-Memex/scrapy-rotating-proxies or similar add-ons (see the configuration sketch after this list).
  • Using free proxies for a spider.
    1 project | /r/scrapy | 2 Jul 2021
    Hello, I'm looking into trying free proxies using something like this library (https://github.com/TeamHG-Memex/scrapy-rotating-proxies/blob/master/README.rst). However, I need to find my own list of proxy IPs to use. When I look up free proxies I find plenty of options, but I'm rather new to this topic and don't know what to use. There seem to be plenty of different types, and I'm not sure whether I should or shouldn't use certain proxy IPs. Any advice on the topic would be appreciated.
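
Several of the questions above come down to the same basic setup. Below is a minimal configuration sketch based on the scrapy-rotating-proxies README; the proxy addresses and the list-file path are placeholders, and the user:password form is what you would use for authenticated providers such as Webshare:

    # settings.py -- minimal scrapy-rotating-proxies setup (a sketch,
    # with placeholder proxy addresses).
    ROTATING_PROXY_LIST = [
        'http://user:password@proxy1.example.com:8000',
        'http://user:password@proxy2.example.com:8000',
    ]
    # Alternatively, load one proxy per line from a file:
    # ROTATING_PROXY_LIST_PATH = '/path/to/proxies.txt'

    DOWNLOADER_MIDDLEWARES = {
        # Assigns a proxy to each request and rotates away from bad ones.
        'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
        # Decides whether a response or exception means the proxy is banned.
        'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
    }

Note that occasional "[rotating_proxies.expire] DEBUG: Proxy ... is DEAD" messages are part of normal operation: the middleware marks a proxy dead after repeated failures through it and re-checks dead proxies later, so a proxy that tests fine in isolation can still be flagged during a crawl.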
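
The CAPTCHA workaround from the first post works by customizing ban detection: when a response is classified as a ban, the middleware retries the request through a different proxy. Below is a sketch of such a policy, following the pattern shown in the project's README; the 'captcha' substring check is a placeholder for however the target site actually signals a CAPTCHA, and "myproject" is a hypothetical module path:

    # myproject/policy.py -- custom ban-detection policy (a sketch).
    from rotating_proxies.policy import BanDetectionPolicy

    class CaptchaBanPolicy(BanDetectionPolicy):
        def response_is_ban(self, request, response):
            # Treat pages that mention "captcha" as bans so the request
            # is retried through another proxy.
            ban = super().response_is_ban(request, response)
            return ban or 'captcha' in response.text.lower()

        def exception_is_ban(self, request, exception):
            # Keep the default behaviour for network-level errors.
            return super().exception_is_ban(request, exception)

    # settings.py -- point the middleware at the custom policy:
    # ROTATING_PROXY_BAN_POLICY = 'myproject.policy.CaptchaBanPolicy'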

What are some alternatives?

When comparing scrapydweb and scrapy-rotating-proxies you can also consider the following projects:

Gerapy - Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Django and Vue.js

scrapy-playwright - 🎭 Playwright integration for Scrapy

scrapy-splash - Scrapy+Splash for JavaScript integration

scrapy-cloudflare-middleware - A Scrapy middleware to bypass Cloudflare's anti-bot protection

SpiderKeeper - admin UI for Scrapy / open source Scrapinghub

SquadJS - Squad Server Script Framework

Shadowrocket-ADBlock-Rules - Provides multiple Shadowrocket rule sets with ad filtering, for non-jailbroken iOS devices to selectively and automatically bypass the firewall.

scrapeops-scrapy-sdk - Scrapy extension that gives you all the scraping monitoring, alerting, scheduling, and data validation you will need straight out of the box.

scrapy-fake-useragent - Random User-Agent middleware based on fake-useragent

scrapy-crawl-once - Scrapy middleware that allows crawling only new content