Scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python. (by scrapy)
feedparser
Parse feeds in Python (by kurtmckee)
| | Scrapy | feedparser |
|---|---|---|
| Mentions | 189 | 7 |
| Stars | 57,527 | 2,153 |
| Growth (stars) | 3.7% | 1.3% |
| Activity | 9.7 | 7.2 |
| Last commit | 5 days ago | 7 days ago |
| Language | Python | Python |
| License | BSD 3-Clause "New" or "Revised" License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Scrapy
Posts with mentions or reviews of Scrapy.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2025-01-16.
- Scrapy needs to have sane defaults that do no harm
- Top 10 Tools for Efficient Web Scraping in 2025
Scrapy is a robust and scalable open-source web crawling framework. It is highly efficient for large-scale projects and supports asynchronous scraping.
- 11 best open-source web crawlers and scrapers in 2024
Language: Python | GitHub: 52.9k stars
- Current problems and mistakes of web scraping in Python and tricks to solve them!
One might ask, what about Scrapy? I'll be honest: I don't really keep up with their updates. But I haven't heard about Zyte doing anything to bypass TLS fingerprinting. So out of the box Scrapy will also be blocked, but nothing is stopping you from using curl_cffi in your Scrapy Spider.
- Scrapy, a fast high-level web crawling and scraping framework for Python
- Automate Spider Creation in Scrapy with Jinja2 and JSON
Install Scrapy (official website) using either pip or conda (follow for detailed instructions):
- Analyzing Svenskalag Data using DBT and DuckDB
Using Scrapy I fetched the data needed (activities and attendance). Scrapy handled authentication using a form request in a very simple way:
- Scrapy Vs. Crawlee
Scrapy is an open-source Python-based web scraping framework that extracts data from websites. With Scrapy, you create spiders, which are autonomous scripts to download and process web content. The limitation of Scrapy is that it does not work very well with JavaScript rendered websites, as it was designed for static HTML pages. We will do a comparison later in the article about this.
- Claude is now available in Europe
- Scrapy: A Fast and Powerful Scraping and Web Crawling Framework
feedparser
Posts with mentions or reviews of feedparser.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-11-12.
- What I Wish Someone Told Me About Postgres
I am using the feedparser library in Python, https://github.com/kurtmckee/feedparser/, which basically takes an RSS URL and standardizes it to a reasonable extent. But I have noticed that different websites still get parsed slightly differently. For example, look at how https://beincrypto.com/feed/ has a long description (containing actual HTML) inside, but this website https://www.coindesk.com/arc/outboundfeeds/rss/ completely cuts the description out. I have about 50 such websites and they all have slight variations. So you are saying that in addition to storing parsed data (title, summary, content, author, pubdate, link, guid) that I currently store, I should also add an XML column and store the raw XML from each URL until I get a good hang of how each site differs?
- RSS can be used to distribute all sorts of information
There is JSON Feed¹ already. One of the spec writers is behind micro.blog, which is the first place I saw it (and also one of the few places I've seen it). I don't think it is a bad idea, and it doesn't take all that long to implement it.
I have long hoped it would pick up with the JSON-ify everything crowd, just so I'd never see a non-Atom feed again. We perhaps wouldn't need so much of the magic that is wrapped up in packages like feedparser² to deal with all the brokenness of RSS in the wild then.
¹ https://www.jsonfeed.org/
² https://github.com/kurtmckee/feedparser
- Help! trying to use scraping for my dissertation but I am clueless
What sites did you try? Looked into RSS yet? Many sites have RSS feeds you can use with something like https://github.com/kurtmckee/feedparser nytimes.com feeds: https://www.nytimes.com/rss
- Newb learning GitHub & Python. Projects?
feedparser
- Python Library to scrape RSS-Feeds from waybackmachine?
You can explore FeedParser too
- looking for a project
feedparser is a Python package for receiving and parsing RSS/Atom newsfeeds. The maintainer is active but really needs much more support.
- Question from an absolute beginner
The simplest thing I know of for monitoring YouTube channels is the RSS feed that each channel has. The format is https://www.youtube.com/feeds/videos.xml?channel_id=[CHANNEL_ID]. If you don't know RSS, take a look at the wiki. For reading RSS in Python there is feedparser (and surely many more).
What are some alternatives?
When comparing Scrapy and feedparser you can also consider the following projects:
requests-html - Pythonic HTML Parsing for Humans™
pyspider - A Powerful Spider(Web Crawler) System in Python.
MechanicalSoup - A Python library for automating interaction with websites.