| | arniesmtpbufferserver | good_job |
|---|---|---|
| Mentions | 6 | 36 |
| Stars | 13 | 2,453 |
| Growth | - | - |
| Activity | 2.4 | 9.3 |
| Last commit | 7 months ago | 7 days ago |
| Language | Python | Ruby |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arniesmtpbufferserver
- Arnie - SMTP buffer server in ~100 lines of async Python
-
Choose Postgres Queue Technology
My guess is that many people are implementing queuing mechanisms just for sending email.
The Linux file system makes a perfectly good basis for a message queue since file moves are atomic.
You can see how this works in Arnie SMTP buffer server, a super simple queue just for emails, no database at all, just the file system.
https://github.com/bootrino/arniesmtpbufferserver
-
Things Unix can do atomically (2010)
A practical application of atomic mv is building simple file-based queuing mechanisms.
For example, I wrote this SMTP buffer server, which moves files between directories as a simple form of message queue.
https://github.com/bootrino/arniesmtpbufferserver
Caveat: I think this needs examination from the perspective of fsync - i.e. I suspect the code should be fsyncing at certain points, but I'm not sure.
I actually wrote (in Rust) a simple file-based message queue using atomic mv. It instantly maxed out the SSD's performance at about 30,000 messages/second.
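The atomic-mv queue idea from these comments can be sketched in a few dozen lines of Python. This is an illustrative sketch, not Arnie's actual code; the directory names and helper functions are assumptions. A message is "claimed" by renaming it into a worker-owned directory - `rename()` within a single filesystem is atomic, so two racing workers can never both win the same file.

```python
import os
import tempfile

QUEUE_DIR = "queue"       # messages waiting to be sent
WORK_DIR = "in_progress"  # messages claimed by a worker
SENT_DIR = "sent"         # messages successfully handled

for d in (QUEUE_DIR, WORK_DIR, SENT_DIR):
    os.makedirs(d, exist_ok=True)

def enqueue(name: str, body: bytes) -> None:
    # Write to a temp file first, then rename into the queue directory.
    # rename() within one filesystem is atomic, so readers never see
    # a half-written message.
    fd, tmp = tempfile.mkstemp(dir=QUEUE_DIR, suffix=".tmp")
    with os.fdopen(fd, "wb") as f:
        f.write(body)
        f.flush()
        os.fsync(f.fileno())  # the fsync caveat mentioned above
    os.rename(tmp, os.path.join(QUEUE_DIR, name))

def claim_one():
    # Claiming = atomically moving the file into the worker's directory.
    # If two workers race, only one rename succeeds for a given file;
    # the loser gets FileNotFoundError and moves on.
    for name in sorted(os.listdir(QUEUE_DIR)):
        if name.endswith(".tmp"):
            continue  # still being written
        try:
            os.rename(os.path.join(QUEUE_DIR, name),
                      os.path.join(WORK_DIR, name))
            return name
        except FileNotFoundError:
            continue  # another worker got it first
    return None

def mark_sent(name: str) -> None:
    os.rename(os.path.join(WORK_DIR, name), os.path.join(SENT_DIR, name))
```

A crashed worker leaves its file in the in-progress directory, where a recovery pass can move it back to the queue - one reason the directory-per-state layout is attractive.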
-
Procrastinate: PostgreSQL-Based Task Queue for Python
Yeah I was using Celery for sending emails - nothing else.
And it was such a nightmare to configure and debug and such overkill for email buffering that in a fit of frustration I wrote the Arnie SMTP buffering server and ditched Celery.
https://github.com/bootrino/arniesmtpbufferserver
It's only 100 lines of code:
https://github.com/bootrino/arniesmtpbufferserver/blob/maste...
-
Show HN: Arnie SMTP buffer server in 100 lines of async Python
Here's the 100 lines of code:
https://github.com/bootrino/arniesmtpbufferserver/blob/master/arniesmtpbufferserver.py
Here's the github repo:
https://github.com/bootrino/arniesmtpbufferserver
It's MIT licensed.
Arnie is a server that has the single purpose of buffering outbound SMTP emails.
A typical web SaaS needs to send emails such as signup, signin, and forgot-password messages.
The web page code should not write these directly to an SMTP server; the two should be decoupled, for a few reasons. First, if there is an error in sending the email, the whole thing simply falls over when the send is executed by the web page code - there is no chance to resend, because the web request has already completed. Second, executing an SMTP request from a web page slows that page's response time while the code connects to the server and sends the email. So when you send SMTP email from your web application, the most performant and safest approach is to buffer the messages for sending. The buffering server then queues them, sends them, and handles things like retries if the target SMTP server is down or throttled.
There are a few ways to solve this problem: you can set up a local email server and configure it for relaying, or, in the Python world, people often use Celery. Complexity is the downside of either option - both have many more features than needed and can be complex to configure, run, and troubleshoot.
Arnie is intended for small-scale usage - for example, a typical web server for a simple SaaS application. Large-scale email traffic would require parallel sends to the SMTP server.
Arnie sends emails sequentially - it does not attempt to send email to the SMTP server in parallel. It probably could, fairly easily, by spawning email send tasks, but SMTP parallelisation was not the goal in writing Arnie.
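The buffering pattern described above - accept messages, send them one at a time, retry on failure - can be sketched in a few lines of asyncio. This is an illustrative sketch, not Arnie's actual code; `send_email` is a hypothetical coroutine standing in for the real SMTP delivery step.

```python
import asyncio

async def send_loop(queue: asyncio.Queue, send_email, retry_delay: float = 1.0):
    """Sequentially drain queued messages, retrying failed sends.

    Messages are sent one at a time, matching Arnie's sequential
    design; a failed send is retried after retry_delay rather than
    dropped, which covers a down or throttling SMTP server.
    """
    while True:
        msg = await queue.get()
        while True:
            try:
                await send_email(msg)
                break
            except Exception:
                # Target SMTP server down or throttling: wait and retry.
                await asyncio.sleep(retry_delay)
        queue.task_done()
```

The web-facing code only ever does a cheap `queue.put_nowait(msg)`, so page response time no longer depends on the SMTP server being reachable.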
good_job
-
solid_queue alternatives - Sidekiq and good_job
3 projects | 21 Apr 2024
This is the most direct competitor of good_job in my opinion.
-
Tuning Rails application structure
Once we are done with the default gems, should we look into something we usually use? That's jwt, because we need session tokens for our API. Next comes our one and only sidekiq. For a long time it was the best-in-town solution for background jobs; now we could also consider solid_queue or good_job. In the development and test groups we need rspec-rails, factory_bot_rails and ffaker. Dealing with money? Start doing it properly from the beginning - do not forget to install money-rails. Once everything is added to the Gemfile, do not forget to trigger bundle install.
-
Postgres as Queue
In the world of Ruby, GoodJob [0] has been doing a _good job_ so far.
[0] - https://github.com/bensheldon/good_job
-
Choose Postgres Queue Technology
For Rails apps, you can do this using the ActiveJob interface via
https://github.com/bensheldon/good_job
Had it in production for about a quarter and it’s worked well.
-
Pg_later: Asynchronous Queries for Postgres
Idk about pgagent, but any table is a resilient queue with the multiple locks available in pg, along with SELECT pg_advisory_lock or SELECT FOR UPDATE queries, and/or LISTEN/NOTIFY.
Several bg job libs are built around native locking functionality
> Relies upon Postgres integrity, session-level Advisory Locks to provide run-once safety and stay within the limits of schema.rb, and LISTEN/NOTIFY to reduce queuing latency.
https://github.com/bensheldon/good_job
> |> lock("FOR UPDATE SKIP LOCKED")
https://github.com/sorentwo/oban/blob/8acfe4dcfb3e55bbf233aa...
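A minimal sketch of the claim-a-row idea, using the stdlib sqlite3 module so it runs anywhere. Postgres queues like GoodJob and Oban get run-once safety from advisory locks or SELECT ... FOR UPDATE SKIP LOCKED, as quoted above; the optimistic UPDATE below demonstrates the same guarantee in a portable, database-agnostic way - the table name and payload are made up for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE jobs ("
    " id INTEGER PRIMARY KEY,"
    " payload TEXT,"
    " claimed INTEGER DEFAULT 0)"
)
db.execute("INSERT INTO jobs (payload) VALUES ('send welcome email')")
db.commit()

def claim_job(conn):
    # Pick a candidate, then atomically flip its flag. The claimed = 0
    # guard in the UPDATE means that if two workers race for the same
    # row, only one UPDATE matches - the run-once property that
    # Postgres queues get from FOR UPDATE SKIP LOCKED or advisory
    # locks, at the cost of a wasted round trip for the loser.
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE claimed = 0 LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    job_id, payload = row
    cur = conn.execute(
        "UPDATE jobs SET claimed = 1 WHERE id = ? AND claimed = 0",
        (job_id,),
    )
    conn.commit()
    return (job_id, payload) if cur.rowcount == 1 else None
```

SKIP LOCKED is the better fit under contention because losing workers skip straight to the next unclaimed row instead of retrying, but the invariant being enforced is the same.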
-
Noticed Gem and ActionCable
The suggestion from /u/tofus is a good one. If you are already using redis as your ActionCable adapter I would use sidekiq. If not and you're using postgres I would consider https://github.com/bensheldon/good_job
-
Introducing tobox: a transactional outbox framework
Probably worth mentioning that aside from delayed_job there are at least two more modern alternatives backed by the DB: Que and good_job.
-
Sidekiq jobs in ActiveRecord transactions
Good article. Sidekiq is a good, well-respected tool. However, if you are starting out I would recommend not using it, and instead choosing a DB-based queue system. We have had great success with que, but there are others like good_job.
-
Mike Perham of Sidekiq: “If you build something valuable, charge money for it.”
Sidekiq Pro is great, we're paying for it! 10k a year I think.
But for people who are interested in alternatives, I'd also suggest Good Job (runs on PostgreSQL).
https://github.com/bensheldon/good_job
-
SQL Maxis: Why We Ditched RabbitMQ and Replaced It with a Postgres Queue
I'm the GoodJob author. Here's the class that is responsible for implementing Postgres's LISTEN/NOTIFY functionality in GoodJob:
https://github.com/bensheldon/good_job/blob/10e9d9b714a668dc...
That's heavily inspired by Rails' Action Cable (websockets) Adapter for Postgres, which is a bit simpler and easier to understand:
https://github.com/rails/rails/blob/be287ac0d5000e667510faba...
Briefly, it spins up a background thread with a dedicated database connection, runs a blocking Postgres LISTEN query, and when that query returns results it forwards them to the other subscribing objects.
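The listener described above can be sketched generically: a background thread blocks on a notification source and fans results out to subscribers. This is a stand-in sketch, not GoodJob's code - a queue.Queue plays the role of the dedicated LISTEN-ing database connection, and the class and method names are made up for the example.

```python
import queue
import threading

class Notifier:
    """Background thread that blocks on a notification source and
    forwards each payload to every subscriber, mirroring the shape of
    a dedicated-connection LISTEN loop."""

    def __init__(self, source: "queue.Queue"):
        self.source = source
        self.subscribers = []
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._listen, daemon=True)
        self.thread.start()

    def subscribe(self, callback):
        with self.lock:
            self.subscribers.append(callback)

    def _listen(self):
        while True:
            payload = self.source.get()  # blocks, like LISTEN
            if payload is None:          # shutdown sentinel
                return
            with self.lock:
                subs = list(self.subscribers)
            for callback in subs:
                callback(payload)
```

Keeping the blocking wait on its own thread and connection means notifications arrive promptly without tying up the pool the job workers use.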
What are some alternatives?
starqueue
Sidekiq - Simple, efficient background processing for Ruby
kubeblocks - KubeBlocks is an open-source control plane that runs and manages databases, message queues and other data infrastructure on K8s.
sidekiq-throttled - Concurrency and rate-limit throttling for Sidekiq
pgjobq - Atomic low latency job queues running on Postgres
Que - A Ruby job queue that uses PostgreSQL's advisory locks for speed and reliability.
Delayed::Job - Database based asynchronous priority queue system -- Extracted from Shopify
Resque - Resque is a Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later.
Sidekiq::Undertaker - Sidekiq::Undertaker allows exploring, reviving or burying dead jobs.
sidekiq_alive - Liveness probe for Sidekiq in Kubernetes deployments
Karafka - Ruby and Rails efficient multithreaded Kafka processing framework
worker - High performance Node.js/PostgreSQL job queue (also suitable for getting jobs generated by PostgreSQL triggers/functions out into a different work queue)