requests-html
Pythonic HTML Parsing for Humans™ (by psf)
Scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python. (by scrapy)
|  | requests-html | Scrapy |
|---|---|---|
| Mentions | 14 | 189 |
| Stars | 13,806 | 54,997 |
| Growth | 0.2% | 1.2% |
| Activity | 0.0 | 9.6 |
| Latest commit | about 1 year ago | 2 days ago |
| Language | Python | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
requests-html
Posts with mentions or reviews of requests-html.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-02-13.
- will requests-html library work as selenium
- 8 Most Popular Python HTML Web Scraping Packages with Benchmarks
requests-html
- How to batch scrape Wall Street Journal (WSJ)'s Financial Ratios Data?
Yeah, thanks for the advice. When using the requests_html library, I tried to slow things down with response.html.render(timeout=1000), but it raises a RuntimeError on Google Colab: https://github.com/psf/requests-html/issues/517.
- Note, the first time you ever run the render() method, it will download Chromium into your home directory (e.g. ~/.pyppeteer/). This only happens once.
- Data scraping tools
For dynamic JavaScript pages, prefer requests-html with XPath selection.
- Which string to-lower-case method do you use?
Example: requests-html, which has a rather exhaustive README.md, but their dedicated page is not that helpful, if I remember correctly, and the domain is currently suspended.
- Top Python libraries/frameworks that you suggest to everyone
When it comes to web scraping, people usually recommend beautifulsoup, lxml, or selenium. But I highly recommend checking out requests-html as well. It's a library that strikes a happy medium: as easy to use as beautifulsoup, yet capable of handling dynamic, JavaScript-rendered data where a full browser emulator like selenium would be overkill.
- How to make all https traffic in program go through a specific proxy?
- Requests_html not working?
Quite possible. If you look at the requests-html source code, it is a single Python file that acts as a wrapper around a bunch of other packages (requests, pyppeteer/Chromium, parse, lxml, etc.), plus a couple of convenience functions. So it could easily be some sort of bad dependency resolution.
- Web Scraping in a professional setting: Selenium vs. BeautifulSoup
What I do is see if I can use requests_html first before trying selenium. requests_html is usually enough if I don't need to interact with browser widgets or if the authentication isn't too difficult to reverse engineer.
Scrapy
Posts with mentions or reviews of Scrapy.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2025-01-16.
- Scrapy needs to have sane defaults that do no harm
- Top 10 Tools for Efficient Web Scraping in 2025
Scrapy is a robust and scalable open-source web crawling framework. It is highly efficient for large-scale projects and supports asynchronous scraping.
- 11 best open-source web crawlers and scrapers in 2024
Language: Python | GitHub: 52.9k stars | link
- Current problems and mistakes of web scraping in Python and tricks to solve them!
One might ask, what about Scrapy? I'll be honest: I don't really keep up with their updates. But I haven't heard about Zyte doing anything to bypass TLS fingerprinting. So out of the box Scrapy will also be blocked, but nothing is stopping you from using curl_cffi in your Scrapy Spider.
- Scrapy, a fast high-level web crawling and scraping framework for Python
- Automate Spider Creation in Scrapy with Jinja2 and JSON
Install scrapy (official website) using either pip or conda (see the documentation for detailed instructions):
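The install itself is a one-liner with either tool:

```shell
# With pip:
pip install scrapy

# Or with conda, from the conda-forge channel:
conda install -c conda-forge scrapy
```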
- Analyzing Svenskalag Data using DBT and DuckDB
Using Scrapy I fetched the data needed (activities and attendance). Scrapy handled authentication using a form request in a very simple way:
- Scrapy Vs. Crawlee
Scrapy is an open-source Python-based web scraping framework that extracts data from websites. With Scrapy, you create spiders, which are autonomous scripts to download and process web content. The limitation of Scrapy is that it does not work very well with JavaScript rendered websites, as it was designed for static HTML pages. We will do a comparison later in the article about this.
- Claude is now available in Europe
- Scrapy: A Fast and Powerful Scraping and Web Crawling Framework
What are some alternatives?
When comparing requests-html and Scrapy you can also consider the following projects:
feedparser - Parse feeds in Python
pyspider - A Powerful Spider(Web Crawler) System in Python.
MechanicalSoup - A Python library for automating interaction with websites.
RoboBrowser