| | arniesmtpbufferserver | neoq |
|---|---|---|
| Mentions | 6 | 5 |
| Stars | 13 | 243 |
| Growth | - | - |
| Activity | 2.4 | 8.3 |
| Last commit | 7 months ago | 14 days ago |
| Language | Python | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arniesmtpbufferserver
- Arnie – SMTP buffer server in ~100 lines of async Python
-
Choose Postgres Queue Technology
My guess is that many people are implementing queuing mechanisms just for sending email.
The Linux file system makes a perfectly good basis for a message queue since file moves are atomic.
You can see how this works in the Arnie SMTP buffer server, a super-simple queue just for emails: no database at all, just the file system.
https://github.com/bootrino/arniesmtpbufferserver
-
Things Unix can do atomically (2010)
A practical application of atomic mv is building simple file-based queuing mechanisms.
For example I wrote this SMTP buffer server which moves things to different directories as a simple form of message queue.
https://github.com/bootrino/arniesmtpbufferserver
Caveat: I think this needs examination from the perspective of fsync - i.e. I suspect the code should be fsyncing at certain points, but I'm not sure.
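The fsync concern can be addressed by flushing both the file contents and the directory entry that the rename creates. A minimal sketch of that general durability pattern, assuming Linux/macOS semantics (this is not what Arnie actually does):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data so it survives a crash: fsync the temp file before
    renaming it, then fsync the directory so the rename itself is durable."""
    dir_path = os.path.dirname(path) or "."
    tmp_path = path + ".tmp"
    fd = os.open(tmp_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)            # flush file contents to disk
    finally:
        os.close(fd)
    os.rename(tmp_path, path)   # atomic replacement
    dir_fd = os.open(dir_path, os.O_RDONLY)  # opening a directory: POSIX only
    try:
        os.fsync(dir_fd)        # flush the new directory entry
    finally:
        os.close(dir_fd)
```

Without the directory fsync, a crash shortly after the rename can leave the file missing even though the write call returned.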
I actually wrote (in Rust) a simple file-based message queue using atomic mv. It instantly maxed out the SSD's performance at about 30,000 messages/second.
-
Procrastinate: PostgreSQL-Based Task Queue for Python
Yeah I was using Celery for sending emails - nothing else.
And it was such a nightmare to configure and debug and such overkill for email buffering that in a fit of frustration I wrote the Arnie SMTP buffering server and ditched Celery.
https://github.com/bootrino/arniesmtpbufferserver
It's only 100 lines of code:
https://github.com/bootrino/arniesmtpbufferserver/blob/maste...
-
Show HN: Arnie SMTP buffer server in 100 lines of async Python
Here's the 100 lines of code:
https://github.com/bootrino/arniesmtpbufferserver/blob/master/arniesmtpbufferserver.py
Here's the github repo:
https://github.com/bootrino/arniesmtpbufferserver
It's MIT licensed.
Arnie is a server that has the single purpose of buffering outbound SMTP emails.
A typical web SaaS needs to send emails such as signup, sign-in, and forgot-password messages.
The web page code itself should not write directly to an SMTP server; the two should be decoupled, for a few reasons. First, if sending the email fails, the whole thing simply falls over when the send is executed by the web page code - there's no chance to resend, because the web request has already completed. Second, executing an SMTP request from a web page slows that page's response time while the code connects to the server and sends the email. So when you send SMTP email from your web application, the most performant and safest approach is to buffer messages for sending. The buffering server then queues them, sends them, and handles things like retries if the target SMTP server is down or throttled.
There are a few ways to solve this problem: you can set up a local email server and configure it for relaying, or, in the Python world, people often use Celery. Complexity is the downside of either approach - both solutions have many more features than needed and can be complex to configure, run, and troubleshoot.
Arnie is intended for small scale usage - for example a typical web server for a simple SAAS application. Large scale email traffic would require parallel sends to the SMTP server.
Arnie sends emails sequentially - it does not attempt to send to the SMTP server in parallel. It probably could, fairly easily, by spawning email-send tasks, but SMTP parallelisation was not a goal in writing Arnie.
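The "spawning email send tasks" idea could be sketched with asyncio, capping concurrency with a semaphore so a burst of mail doesn't open an unbounded number of SMTP connections. The send coroutine here is a placeholder for a real SMTP delivery, not Arnie's actual code:

```python
import asyncio

async def send_one(message: str) -> str:
    """Placeholder for a real SMTP delivery (e.g. via an async SMTP client)."""
    await asyncio.sleep(0)  # stands in for network I/O
    return message

async def send_all(messages: list[str], max_parallel: int = 4) -> list[str]:
    """Send messages concurrently, bounded by a semaphore."""
    sem = asyncio.Semaphore(max_parallel)

    async def bounded(msg: str) -> str:
        async with sem:
            return await send_one(msg)

    # gather preserves input order in its results
    return await asyncio.gather(*(bounded(m) for m in messages))
```

A sequential loop becomes `asyncio.run(send_all(batch))`; the semaphore is what keeps parallelism from turning into a connection flood.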
- Arnie - SMTP buffer server in ~100 lines of async Python
neoq
- Show HN: Hatchet – Open-source distributed task queue
-
Choose Postgres Queue Technology
I just want to commend OP - if they’re here - for choosing an int64 for job IDs, and MD5 for hashing the payload in Neoq, the job library linked [0] from the article.
Especially given the emphasis on YAGNI: you don't need a UUID primary key, with all the problems it brings for B+trees (the structure an RDBMS is built on), nor do you need the collision resistance of SHA-256; the odds of creating a duplicate job hash with MD5 are vanishingly small.
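The "vanishingly small" claim checks out against the birthday bound: for n random inputs and a b-bit digest, the accidental-collision probability is roughly n² / 2^(b+1). A quick illustration (the `job_hash` helper is invented for this sketch, not neoq's API):

```python
import hashlib

def job_hash(payload: bytes) -> str:
    """Deduplication hash for a job payload; MD5 is fine here because
    we only care about accidental collisions, not adversarial ones."""
    return hashlib.md5(payload).hexdigest()

def collision_probability(n: int, bits: int = 128) -> float:
    """Birthday-bound approximation: p ~= n^2 / 2^(bits+1)."""
    return n * n / 2 ** (bits + 1)

# Even at a billion jobs, an accidental MD5 collision is absurdly
# unlikely: collision_probability(10**9) is on the order of 1e-21.
```

MD5 is broken for security (collisions can be constructed deliberately), but for deduplicating trusted job payloads only the accidental case matters.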
As to the actual topic, it’s fine IFF you carefully monitor for accumulating dead tuples, and adjust auto-vacuum for that table as necessary. While not something you’d run into at the start, at a modest scale you may start to see issues. May. You may also opt to switch to Redis or something else before that point anyway.
[0]: https://github.com/acaloiaro/neoq
-
Ask HN: Tell us about your project that's not done yet but you want feedback on
Neoq (https://github.com/acaloiaro/neoq) is a background job processor for Go.
Yes, another one. It began from my desire to have a robust Postgres-backed job processor. What I quickly realized was that the interface in front of the queue was what really mattered. That allowed me to add both in-memory and Redis (provided by asynq) backends behind the same interface, which lets dependent projects switch between backends for different settings and durability requirements: e.g. in-memory for testing/development, Postgres when you're not running Google-scale jobs, and Redis for all the obvious use cases of a Redis-backed queue.
This allows me to swap out job queue backends without changing a line of job processor code.
I'm familiar with the theory that one shouldn't implement queues on Postgres, and to a large extent, I disagree with those theories. I'm confident you can point out a scenario in which one shouldn't, and I contend that those scenarios are the exception rather than the rule.
-
Examples of using task scheduler with Go?
I created a background processor called Neoq (https://github.com/acaloiaro/neoq) that is likely to interest you.
-
SQL Maxis: Why We Ditched RabbitMQ and Replaced It with a Postgres Queue
This is exactly the thesis behind neoq: https://github.com/acaloiaro/neoq
What are some alternatives?
starqueue
kubeblocks - KubeBlocks is an open-source control plane that runs and manages databases, message queues and other data infrastructure on K8s.
oban - 💎 Robust job processing in Elixir, backed by modern PostgreSQL and SQLite3
pgjobq - Atomic low latency job queues running on Postgres
tembo - Monorepo for Tembo Operator, Tembo Stacks, and Tembo CLI
Asynq - Simple, reliable, and efficient distributed task queue in Go
pgtt - PostgreSQL extension to create, manage and use Oracle-style Global Temporary Tables, as found in other RDBMSs
starlark-go - Starlark in Go: the Starlark configuration language, implemented in Go
tqs - Tiny Queue Service (Server)
divedb - This is the source repository for the DiveDB site