storm-crawler vs scaling-to-distributed-crawling
| | storm-crawler | scaling-to-distributed-crawling |
|---|---|---|
| Mentions | - | 5 |
| Stars | 858 | 36 |
| Growth | 1.2% | - |
| Activity | 8.8 | 0.0 |
| Latest commit | 8 days ago | over 2 years ago |
| Language | HTML | HTML |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
storm-crawler
We haven't tracked posts mentioning storm-crawler yet.
Tracking mentions began in Dec 2020.
scaling-to-distributed-crawling
DOs and DON'Ts of Web Scraping
We published a repository and blog post about distributed crawling in Python. It is a bit more complicated than what we've seen so far: it uses external software (Celery for the asynchronous task queue and Redis as the database).
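The pattern the post describes, with workers pulling crawl tasks from a shared queue, can be sketched with Python's standard library alone. The post itself uses Celery backed by Redis; the in-process queue, worker count, and `crawl` placeholder below are illustrative stand-ins, not the post's code:

```python
from queue import Queue
from threading import Thread

# Stand-in for the Celery/Redis broker: a thread-safe in-process queue.
task_queue = Queue()
results = []

def crawl(url):
    # Placeholder for the real fetch-and-parse step.
    return f"crawled {url}"

def worker():
    # Each worker pulls URLs until it receives a sentinel,
    # mirroring how Celery workers consume tasks from a broker.
    while True:
        url = task_queue.get()
        if url is None:  # sentinel: shut this worker down
            break
        results.append(crawl(url))
        task_queue.task_done()

urls = ["https://example.com/1", "https://example.com/2"]
for u in urls:
    task_queue.put(u)

threads = [Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
task_queue.join()          # wait until every URL has been processed
for _ in threads:
    task_queue.put(None)   # one sentinel per worker
for t in threads:
    t.join()

print(sorted(results))
```

Swapping the in-process queue for Celery tasks and a Redis broker distributes the same loop across machines without changing its shape.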
- Mastering Web Scraping in Python: Scaling to Distributed Crawling - ZenRows
Mastering Web Scraping in Python: Scaling to Distributed Crawling
We will start to separate concepts before the project grows. We already have two files: tasks.py and main.py. We will create another two to host crawler-related functions (crawler.py) and database access (repo.py). Please look at the snippet below for the repo file; it is not complete, but you get the idea. There is a GitHub repository with the final content in case you want to check it.
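The repo.py split described above can be sketched as follows. The function names, and the in-memory set standing in for the post's Redis instance, are illustrative assumptions rather than the post's exact code:

```python
# repo.py sketch: centralizes "have we seen this URL?" bookkeeping,
# so crawler.py and tasks.py never touch the storage layer directly.
# An in-memory set stands in for the Redis instance the post uses.
_visited = set()

def add_visited(url: str) -> None:
    # In the real repo file this would be a Redis write (e.g. a set add).
    _visited.add(url)

def is_visited(url: str) -> bool:
    # Membership check against the store of already-crawled URLs.
    return url in _visited

def count_visited() -> int:
    return len(_visited)

# Usage: the crawler records each page it fetches and skips repeats.
add_visited("https://example.com/")
print(is_visited("https://example.com/"))       # True
print(is_visited("https://example.com/about"))  # False
```

Keeping every storage call behind repo.py means swapping the backing store (in-memory, Redis, or anything else) touches one file only.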
What are some alternatives?
- Apache Nutch - Apache Nutch is an extensible and scalable web crawler
- celery - Distributed Task Queue (development branch)
- jsoup - jsoup: the Java HTML parser, built for HTML editing, cleaning, scraping, and XSS safety.
- colly - Elegant Scraper and Crawler Framework for Golang
- Crawler4j - Open Source Web Crawler for Java
- Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
- Sparkler - Spark-Crawler: Apache Nutch-like crawler that runs on Apache Spark.
- Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.