| | arniesmtpbufferserver | Sidekiq |
|---|---|---|
| Mentions | 6 | 92 |
| Stars | 13 | 12,950 |
| Growth | - | 0.3% |
| Activity | 2.4 | 8.9 |
| Last commit | 7 months ago | 8 days ago |
| Language | Python | Ruby |
| License | MIT License | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arniesmtpbufferserver
- Arnie – SMTP buffer server in ~100 lines of async Python
-
Choose Postgres Queue Technology
My guess is that many people are implementing queuing mechanisms just for sending email.
The Linux file system makes a perfectly good basis for a message queue since file moves are atomic.
You can see how this works in Arnie SMTP buffer server, a super simple queue just for emails, no database at all, just the file system.
https://github.com/bootrino/arniesmtpbufferserver
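The core idea - a queue built from nothing but atomic file moves - can be sketched in a few lines. This is an illustrative reconstruction, not Arnie's actual code; the directory names (`staging`, `queue`, `processing`) are hypothetical:

```python
import os
import uuid

# Minimal file-based queue: enqueue writes into a staging directory,
# then an atomic os.rename publishes the file into the queue directory.
# Because rename within one filesystem is atomic, consumers never see
# a half-written message.

def enqueue(staging_dir: str, queue_dir: str, payload: bytes) -> str:
    name = uuid.uuid4().hex
    tmp_path = os.path.join(staging_dir, name)
    with open(tmp_path, "wb") as f:
        f.write(payload)
    final_path = os.path.join(queue_dir, name)
    os.rename(tmp_path, final_path)  # atomic publish
    return final_path

def dequeue(queue_dir: str, processing_dir: str):
    """Claim the oldest message by atomically moving it out of the queue."""
    for name in sorted(os.listdir(queue_dir)):
        src = os.path.join(queue_dir, name)
        dst = os.path.join(processing_dir, name)
        try:
            os.rename(src, dst)  # atomically claim the message
        except FileNotFoundError:
            continue  # another worker claimed it first
        with open(dst, "rb") as f:
            return f.read()
    return None  # queue empty
```

After a successful send, the worker would delete the file from `processing`; on failure, a move back into `queue` retries it.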
-
Things Unix can do atomically (2010)
A practical application of atomic mv is building simple file-based queuing mechanisms.
For example I wrote this SMTP buffer server which moves things to different directories as a simple form of message queue.
https://github.com/bootrino/arniesmtpbufferserver
Caveat: I think this needs examination from the perspective of fsync - i.e. I suspect the code should be fsyncing at certain points, but I'm not sure.
I actually wrote (in Rust) a simple file based message queue using atomic mv. It instantly maxed out the SSD performance at about 30,000 messages/second.
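The fsync caveat above can be addressed with a write-then-rename pattern that syncs at each step. This is a generic durability sketch, not code from either linked repo, and directory-fd fsync as shown is POSIX-specific (it won't work on Windows):

```python
import os

def durable_enqueue(staging_dir: str, queue_dir: str,
                    name: str, payload: bytes) -> None:
    """Write a message so it survives a crash once this returns:
    fsync the file data, atomically rename it into the queue,
    then fsync the queue directory so the rename itself is durable."""
    tmp_path = os.path.join(staging_dir, name)
    with open(tmp_path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # ensure the data reaches the disk

    os.rename(tmp_path, os.path.join(queue_dir, name))

    # The rename updates directory metadata; fsync the directory
    # so the new directory entry is also persisted.
    dir_fd = os.open(queue_dir, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```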
-
Procrastinate: PostgreSQL-Based Task Queue for Python
Yeah I was using Celery for sending emails - nothing else.
And it was such a nightmare to configure and debug and such overkill for email buffering that in a fit of frustration I wrote the Arnie SMTP buffering server and ditched Celery.
https://github.com/bootrino/arniesmtpbufferserver
It's only 100 lines of code:
https://github.com/bootrino/arniesmtpbufferserver/blob/maste...
-
Show HN: Arnie SMTP buffer server in 100 lines of async Python
Here's the 100 lines of code:
https://github.com/bootrino/arniesmtpbufferserver/blob/master/arniesmtpbufferserver.py
Here's the github repo:
https://github.com/bootrino/arniesmtpbufferserver
It's MIT licensed.
Arnie is a server that has the single purpose of buffering outbound SMTP emails.
A typical web SAAS needs to send emails such as signup/signin/forgot password etc.
The web page code itself should not write directly to an SMTP server; instead the two should be decoupled, for a few reasons. One is that if there is an error in sending the email, the whole thing simply falls over when the send is executed by the web page code - there's no chance to resend because the web request has already completed. Another is that executing an SMTP request from a web page slows down that page's response time while the code connects to the server and sends the email. So when you send SMTP email from your web application, the most performant and safest approach is to buffer messages for sending. The buffering server then queues them, sends them, and handles things like retries if the target SMTP server is down or throttled.
There are a few ways to solve this problem - you can set up a local email server and configure it for relaying, or, in the Python world, people often use Celery. Complexity is the downside of either approach - both solutions have many more features than needed and can be complex to configure/run/troubleshoot.
Arnie is intended for small scale usage - for example a typical web server for a simple SAAS application. Large scale email traffic would require parallel sends to the SMTP server.
Arnie sends emails sequentially - it does not attempt to send to the SMTP server in parallel. It probably could, fairly easily, by spawning email send tasks, but SMTP parallelisation was not the goal in writing Arnie.
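The parallelisation the author describes could look roughly like this: spawn one task per message and cap how many run at once with a semaphore. This is a hypothetical sketch, not Arnie's code; `send_one` stands in for a real SMTP delivery coroutine (e.g. one built on aiosmtplib):

```python
import asyncio

async def send_all(messages, send_one, max_concurrent: int = 10):
    """Send all messages concurrently, with at most max_concurrent
    SMTP deliveries in flight at any moment."""
    sem = asyncio.Semaphore(max_concurrent)

    async def guarded(msg):
        async with sem:
            return await send_one(msg)

    # gather spawns every send at once; the semaphore enforces the cap
    return await asyncio.gather(*(guarded(m) for m in messages))
```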
- Arnie - SMTP buffer server in ~100 lines of async Python
Sidekiq
-
Hanami and HTMX - progress bar
Hi there! I want to show off a little feature I made using hanami, htmx and a little bit of redis + sidekiq.
-
solid_queue alternatives - Sidekiq and good_job
3 projects | 21 Apr 2024
I'd say Sidekiq is the top competitor here.
-
Valkey Is Rapidly Overtaking Redis
There's something wrong at Redislabs: it took them over a year to get RESP3 rolled out into their hosted service - you'd expect a rollout of that to be a bit quicker when they're the owner of Redis.
It affected us when upgrading Sidekiq to version 7, which dropped support for older Redis, and their Envoy proxy setup didn't support HELLO and RESP3: https://github.com/sidekiq/sidekiq/issues/5594
-
Redis Re-Implemented with SQLite
That depends on how the `maxmemory-policy` is configured, and queue systems based on Redis will tell you not to allow eviction. https://github.com/sidekiq/sidekiq/wiki/Using-Redis#memory (it even logs a warning if it detects your Redis is misconfigured, IIRC).
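Concretely, the no-eviction setting the Sidekiq wiki points at is a one-line change in `redis.conf` (shown here as a fragment, assuming a dedicated Redis instance for the queue):

```conf
# redis.conf - queue data must never be silently evicted under memory
# pressure; with noeviction, writes fail loudly instead of losing jobs
maxmemory-policy noeviction
```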
-
3 one-person million dollar online businesses
Sidekiq https://sidekiq.org/: This one started as an open-source project; once it got enough traction, the developer made a premium version of it and now makes money by selling licenses to businesses.
-
Choose Postgres Queue Technology
Sidekiq will drop in-progress jobs when a worker crashes. Sidekiq Pro can recover those jobs but with a large delay. Sidekiq is excellent overall but it’s not suitable for processing critical jobs with a low latency guarantee.
https://github.com/sidekiq/sidekiq/wiki/Reliability
-
We built the fastest CI in the world. It failed
> I'm not sure feature withholding has traditionally worked out well in the developer space.
I think it's worked out well for Sidekiq (https://sidekiq.org). I really like their model of layering valuable features between the OSS / Pro / Enterprise licenses.
-
Exploring concurrent rate limiters, mutexes, semaphores
I was studying Sidekiq's page on rate limiters. The first type of rate limiting mentioned is the concurrent limiter: only n tasks are allowed to run at any point in time. Note that this is independent of time units (e.g. per second), or how long they take to run. The only limitation is the number of concurrent tasks/requests.
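A concurrent limiter of this kind is essentially a counting semaphore. Here is a toy Python version (illustrative only - Sidekiq's actual limiter is Ruby and has a different API) that also records the peak concurrency observed, to show the cap holding:

```python
import threading

class ConcurrentLimiter:
    """At most `limit` callers may run the guarded section at once,
    independent of time units or how long each task takes."""

    def __init__(self, limit: int):
        self._sem = threading.BoundedSemaphore(limit)
        self._lock = threading.Lock()
        self._active = 0
        self.peak = 0  # highest concurrency observed, for demonstration

    def __enter__(self):
        self._sem.acquire()  # blocks while `limit` tasks are running
        with self._lock:
            self._active += 1
            self.peak = max(self.peak, self._active)
        return self

    def __exit__(self, *exc):
        with self._lock:
            self._active -= 1
        self._sem.release()
        return False
```

Usage is `with limiter: do_work()` - the 21st caller simply waits until one of the 20 in-flight tasks finishes, with no notion of "per second" anywhere.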
- Ask HN: What are some of the most elegant codebases in your favorite language?
- Sidekiq and managing resumable jobs?
What are some alternatives?
starqueue
Resque - Resque is a Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later.
kubeblocks - KubeBlocks is an open-source control plane that runs and manages databases, message queues and other data infrastructure on K8s.
Sneakers - A fast background processing framework for Ruby and RabbitMQ
pgjobq - Atomic low latency job queues running on Postgres
Shoryuken - A super efficient Amazon SQS thread based message processor for Ruby
Sucker Punch - Sucker Punch is a Ruby asynchronous processing library using concurrent-ruby, heavily influenced by Sidekiq and girl_friday.
Apache Kafka - Mirror of Apache Kafka
celery - Distributed Task Queue (development branch)
good_job - Multithreaded, Postgres-based, Active Job backend for Ruby on Rails.
Delayed::Job - Database based asynchronous priority queue system -- Extracted from Shopify
Karafka - Ruby and Rails efficient multithreaded Kafka processing framework