scaling-to-distributed-crawling vs storm-crawler

| | scaling-to-distributed-crawling | storm-crawler |
|---|---|---|
| Mentions | 5 | - |
| Stars | 36 | 858 |
| Growth | - | 1.2% |
| Activity | 0.0 | 8.8 |
| Last commit | over 2 years ago | 8 days ago |
| Language | HTML | HTML |
| License | MIT License | Apache License 2.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
scaling-to-distributed-crawling
- DOs and DON'Ts of Web Scraping
  We published a repository and blog post about distributed crawling in Python. It is a bit more complicated than what we've seen so far: it relies on external software, with Celery as the asynchronous task queue and Redis as the database (a minimal sketch of this setup appears after this list).
- Mastering Web Scraping in Python: Scaling to Distributed Crawling – ZenRows
- Mastering Web Scraping in Python: Scaling to Distributed Crawling
  We will start to separate concerns before the project grows. We already have two files: tasks.py and main.py. We will create another two to host the crawler-related functions (crawler.py) and the database access (repo.py). Please look at the snippet below for the repo file; it is not complete, but you get the idea (a rough sketch of such a module appears after this list). There is a GitHub repository with the final content in case you want to check it.
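The first excerpt above names the moving parts (Celery for the asynchronous task queue, Redis as the shared store) without showing them wired together. Below is a minimal sketch of that setup in Python, assuming Redis runs locally on the default port; the module name, Redis URL, and crawl logic are illustrative, not the repository's actual code.

```python
# tasks.py - a minimal Celery + Redis sketch (illustrative, not the repo's code).
import requests
from bs4 import BeautifulSoup
from celery import Celery
from urllib.parse import urljoin

app = Celery(
    "crawler",
    broker="redis://127.0.0.1:6379/1",   # Celery uses Redis as its task queue
    backend="redis://127.0.0.1:6379/1",  # and stores task results there too
)

@app.task
def crawl(url):
    """Fetch one page, extract its links, and enqueue each link as a new task."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    for anchor in soup.find_all("a", href=True):
        link = urljoin(url, anchor["href"])
        # .delay() pushes the work onto the queue instead of recursing,
        # so any number of workers can pick it up in parallel.
        crawl.delay(link)
```

A worker is started with `celery -A tasks worker --loglevel=info`, and the crawl is seeded from a script or shell with `crawl.delay("https://example.com")`. As written, this sketch would revisit pages endlessly; deduplication is the job of the database-access module sketched next.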
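The second excerpt refers to a repo.py snippet that is not reproduced on this page. As a rough idea of what a Redis-backed database-access module could look like, here is a hedged sketch; the key names and helper functions are assumptions for illustration, not the ZenRows repository's actual contents.

```python
# repo.py - database access kept in one module, as the excerpt describes.
# Key names and function names are assumptions made for illustration.
import redis

connection = redis.Redis(host="127.0.0.1", port=6379, db=1, decode_responses=True)

def add_to_visited(url):
    # A shared Redis set lets every worker cheaply check whether
    # a URL has already been crawled.
    connection.sadd("crawler:visited", url)

def is_visited(url):
    return connection.sismember("crawler:visited", url)

def add_to_queue(url):
    # Pending URLs live in a Redis list used as a simple FIFO queue.
    connection.rpush("crawler:queue", url)

def pop_from_queue():
    # Returns None once the queue is empty.
    return connection.lpop("crawler:queue")

def count_visited():
    return connection.scard("crawler:visited")
```

A crawl task would then guard each link with something like `if not repo.is_visited(link): repo.add_to_visited(link); crawl.delay(link)`, so every page is fetched only once no matter how many workers are running.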
storm-crawler
We haven't tracked posts mentioning storm-crawler yet.
Tracking mentions began in Dec 2020.
What are some alternatives?
celery - Distributed Task Queue (development branch)
Apache Nutch - Apache Nutch is an extensible and scalable web crawler
colly - Elegant Scraper and Crawler Framework for Golang
jsoup - jsoup: the Java HTML parser, built for HTML editing, cleaning, scraping, and XSS safety.
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
Crawler4j - Open Source Web Crawler for Java
Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.
Sparkler - Spark-Crawler: Apache Nutch-like crawler that runs on Apache Spark.
newspaper - newspaper3k is a news, full-text, and article metadata extraction library for Python 3.
PeARS-orchard - This is the development version of PeARS, the people's search engine. More compact but less robust than PeARS-federated. If you just want to use PeARS in real life, use PeARS-federated instead.
crawlee - A web scraping and browser automation library for Node.js to build reliable crawlers, in JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP. Both headful and headless mode. With proxy rotation.