scaling-to-distributed-crawling
celery
| | scaling-to-distributed-crawling | celery |
|---|---|---|
| Mentions | 5 | 43 |
| Stars | 36 | 23,498 |
| Growth | - | 1.6% |
| Activity | 0.0 | 9.5 |
| Latest commit | over 2 years ago | 4 days ago |
| Language | HTML | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scaling-to-distributed-crawling
- DOs and DON'Ts of Web Scraping
We published a repository and blog post about distributed crawling in Python. It is a bit more complicated than what we've seen so far, and it uses external software: Celery as an asynchronous task queue and Redis as the database.
- Mastering Web Scraping in Python: Scaling to Distributed Crawling – ZenRows
- Mastering Web Scraping in Python: Scaling to Distributed Crawling
We will start to separate concerns before the project grows. We already have two files: tasks.py and main.py. We will create another two to host crawler-related functions (crawler.py) and database access (repo.py). Please look at the snippet below for the repo file; it is not complete, but you get the idea. There is a GitHub repository with the final content in case you want to check it.
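As a hypothetical sketch of what such a repo.py might look like (the real file in the ZenRows repository differs): the key names are assumptions, and the connection is injected so the functions work with any client exposing Redis's `sismember`/`sadd`/`rpush`/`lpop` — in production it would be a `redis.Redis` instance built from a connection pool.

```python
# Hypothetical repo.py sketch: Redis-backed URL bookkeeping for the crawler.

TO_VISIT_KEY = "crawling:to_visit"  # pending URLs (list, used as a FIFO queue)
SEEN_KEY = "crawling:seen"          # every URL ever queued (set, for dedup)

def add_to_visit(conn, url):
    """Queue a URL exactly once: skip it if it was ever queued before."""
    if not conn.sismember(SEEN_KEY, url):
        conn.sadd(SEEN_KEY, url)
        conn.rpush(TO_VISIT_KEY, url)

def get_next_to_visit(conn):
    """Pop the oldest pending URL, or None when the queue is empty."""
    return conn.lpop(TO_VISIT_KEY)
```

Keeping all Redis access behind this module means the rest of the crawler never touches key names directly, and the storage layer can be swapped or stubbed out in tests.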
celery
- Streaming responses to websockets with multiple LLMs, am I going about this wrong?
So this might be my understanding, but stuff like celery is more like an orchestrator that chunks up workloads (think Hadoop with multiple nodes).
- Examples of using task scheduler with Go?
In the Django world, you'd probably rely on Celery to do this for you. You're probably looking for something similar that works with Go. https://github.com/celery/celery
- SynchronousOnlyOperation from celery task using gevent execution pool on django orm
- FastAPI + Celery problem: Celery task is still getting executed even though I'm raising an exception on task_prerun
I've been doing some research and there doesn't seem to be much information on this issue; additionally, there's this, but without a fix or any workaround yet: https://github.com/celery/celery/issues/7792
- Taskiq: async celery alternative
RabbitMQ classic mirrored queues are very fragile to network partitioning. They are deprecated in favor of quorum queues, but Celery doesn't support them yet: https://github.com/celery/celery/issues/6067
- Use Celery with any Django Storage as a Result Backend
The Celery package provides a number of (undocumented!) result backends to store task results in various local, network, and cloud storage services. The django-celery-result package adds options to use Django-specific ORM-based result storage, as well as the Django-specific cache subsystem.
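As a hedged illustration of the ORM-based option, here is a Django settings fragment in the style of the widely used django-celery-results package (the aliases shown follow that package's documented conventions; the package discussed above may differ):

```python
# settings.py (sketch): store Celery task results via Django's ORM.

INSTALLED_APPS = [
    # ... your apps ...
    "django_celery_results",
]

# Persist task results in the database through the Django ORM:
CELERY_RESULT_BACKEND = "django-db"

# Or route results through Django's cache framework instead:
# CELERY_RESULT_BACKEND = "django-cache"
# CELERY_CACHE_BACKEND = "default"
```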
- Django Styleguide
I spent 3 years building a high-scale crawler on top of Celery.
I can't recommend it. We found many bugs in the more advanced features of Celery (like Canvas), and we also ran into some really weird issues like tasks getting duplicated for no reason [1].
The most concerning problem is that the project was abandoned. The original creator is not working on it anymore, and all issues that we raised were ignored. We had to fork the project and apply our own fixes to it. This was 4 years ago, so maybe things have improved since then.
Celery is also extremely complex.
I would recommend https://dramatiq.io/ instead.
[1]: https://github.com/celery/celery/issues/4426
- Processing input and letting user download the result
You can use celery to process the file for extraction, saving and creating rar/zip.
- RQ-Scheduler for tasks in far future?
Celery not usefull for long term future tasks (far future) · Issue #4522 · celery/celery (github.com)
What are some alternatives?
colly - Elegant Scraper and Crawler Framework for Golang
dramatiq - A fast and reliable background task processing library for Python 3.
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
Apache Kafka - Mirror of Apache Kafka
Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.
huey - a little task queue for python
newspaper - newspaper3k is a news, full-text, and article metadata extraction library for Python 3.
NATS - High-Performance server for NATS.io, the cloud and edge native messaging system.
PeARS-orchard - This is the development version of PeARS, the people's search engine. More compact but less robust than PeARS-lite. If you just want to use PeARS as a local indexer, use PeARS-lite instead.
rq - Simple job queues for Python
storm-crawler - A scalable, mature and versatile web crawler based on Apache Storm
kombu - Messaging library for Python.