| | litequeue | datasette.io |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 138 | 81 |
| Growth | 3.6% | - |
| Activity | 7.4 | 8.0 |
| Last commit | about 2 months ago | 12 days ago |
| Language | Python | HTML |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
litequeue
-
Choose Postgres Queue Technology
To make sure that the message you are trying to retrieve hasn't already been locked by another worker.
[0]: https://github.com/litements/litequeue/
[1]: https://github.com/litements/litequeue/blob/3fece7aa9e9a31e4...
-
SQL Maxis: Why We Ditched RabbitMQ and Replaced It with a Postgres Queue
SQLite is missing some features like `SELECT FOR UPDATE`, but you can work around that with a few extra queries. I wrote litequeue[0] for this specific purpose. I haven't been able to use it much, so I don't have real-world numbers on how it scales, but the scaling limit is how fast you can insert into the database.
[0]: https://github.com/litements/litequeue
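The workaround the comment describes - claiming a message with an atomic write instead of `SELECT FOR UPDATE` - can be sketched roughly like this. This is an illustration of the pattern, not litequeue's actual implementation; the table schema and function names are made up:

```python
import sqlite3
import time
import uuid

# Hypothetical queue schema; litequeue's real schema may differ.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute(
    "CREATE TABLE queue ("
    " message_id TEXT PRIMARY KEY,"
    " message TEXT,"
    " status INTEGER DEFAULT 0,"  # 0 = ready, 1 = locked
    " lock_time REAL)"
)

def put(message):
    conn.execute(
        "INSERT INTO queue (message_id, message) VALUES (?, ?)",
        (uuid.uuid4().hex, message),
    )

def pop():
    # BEGIN IMMEDIATE takes the write lock up front, so no other worker
    # can lock the same message between our SELECT and our UPDATE.
    cur = conn.cursor()
    cur.execute("BEGIN IMMEDIATE")
    row = cur.execute(
        "SELECT message_id, message FROM queue WHERE status = 0 LIMIT 1"
    ).fetchone()
    if row is None:
        cur.execute("COMMIT")
        return None
    cur.execute(
        "UPDATE queue SET status = 1, lock_time = ? WHERE message_id = ?",
        (time.time(), row[0]),
    )
    cur.execute("COMMIT")
    return row

put("hello")
print(pop())
```

Because every pop is a short write transaction, throughput is bounded by insert/update speed, which matches the scaling observation above.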
-
What's New in SQLite 3.35
The `RETURNING` clause is so awesome! I'm implementing a set of data structures on top of SQLite (one of them is a queue[0]), and I used to need a whole transaction to lock a message and then return it; this makes it much easier.
There's one little issue I keep finding with SQLite, and it's that most virtual servers / VM images ship with version 3.22.0, and upgrading often means building from source.
In any case, SQLite is absolutely wonderful. My favorite way of building products is having a folder for all the DBs that I mount to docker-compose. This release makes it even better.
[0] https://github.com/litements/litequeue
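The lock-then-return transaction the comment mentions collapses into a single statement with `RETURNING`. A hedged sketch (the `queue` table here is illustrative, not litequeue's schema; this needs SQLite 3.35 or newer):

```python
import sqlite3

# Made-up queue table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE queue ("
    " message_id INTEGER PRIMARY KEY,"
    " message TEXT,"
    " status INTEGER DEFAULT 0)"  # 0 = ready, 1 = locked
)
conn.execute("INSERT INTO queue (message) VALUES ('hello')")

def pop_message(conn):
    # Lock the oldest ready message and hand it back in one statement,
    # instead of a SELECT plus UPDATE wrapped in a transaction.
    return conn.execute(
        """
        UPDATE queue SET status = 1
        WHERE message_id = (
            SELECT message_id FROM queue WHERE status = 0
            ORDER BY message_id LIMIT 1
        )
        RETURNING message_id, message
        """
    ).fetchone()

if sqlite3.sqlite_version_info >= (3, 35, 0):
    print(pop_message(conn))
```

The version guard reflects the comment's other complaint: many stock images still ship an older SQLite, so `RETURNING` may simply not be available.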
datasette.io
-
Architecture Notes: Datasette
Opened an issue exploring alternatives here: https://github.com/simonw/datasette.io/issues/109
I decided to just drop "any size" but keep "any shape".
-
How to have git pushes auto-deploy to a remote server?
Here's an example from one of my projects: https://github.com/simonw/datasette.io/blob/main/.github/workflows/deploy.yml
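A push-triggered deploy workflow of that kind typically looks something like the sketch below. This is a generic illustration, not the contents of the linked file; the secrets names and remote path are placeholders:

```yaml
name: Deploy on push
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Copy the checked-out code to the remote server over SSH.
      # DEPLOY_KEY, USER, and HOST are repository secrets (placeholders here).
      - name: Push to server
        run: |
          eval "$(ssh-agent -s)"
          ssh-add - <<< "${{ secrets.DEPLOY_KEY }}"
          rsync -az --delete ./ "${{ secrets.USER }}@${{ secrets.HOST }}:/srv/app/"
```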
-
Schema on write is better to live by
I've come around to almost the opposite approach.
I pull all of the data I can get my hands on (from Twitter, GitHub, Swarm, Apple Health, Pocket, Apple Photos and more) into SQLite database tables that match the schema of the system that they are imported from.
For my own personal Dogsheep (https://simonwillison.net/2020/Nov/14/personal-data-warehous...) that's 119 tables right now.
Then I use SQL queries against those tables to extract and combine data in ways that are useful to me.
If the schema of the systems I am importing from changes, I can update my queries to compensate for the change.
This protects me from having to solve for a standard schema up front - I take whatever those systems give me. But it lets me combine and search across all of the data from disparate systems essentially at runtime.
I even have a search engine for this, which is populated by SQL queries against the different source tables. You can see an example of how that works at https://github.com/simonw/datasette.io/blob/main/templates/d... - which powers the search interface at https://datasette.io/-/beta
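Combining per-source tables at query time can be as simple as a `UNION ALL` over whichever columns you care about. A made-up sketch with hypothetical `tweets` and `commits` tables, not the actual Dogsheep schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical per-source tables, each matching its source system's shape.
conn.executescript("""
CREATE TABLE tweets (id INTEGER PRIMARY KEY, full_text TEXT, created_at TEXT);
CREATE TABLE commits (sha TEXT PRIMARY KEY, message TEXT, author_date TEXT);
INSERT INTO tweets VALUES (1, 'Shipped a new Datasette release', '2021-03-01');
INSERT INTO commits VALUES ('abc123', 'Fix queue locking bug', '2021-03-02');
""")

# Combine disparate sources at query time instead of normalizing up front.
rows = conn.execute("""
SELECT 'twitter' AS source, full_text AS text, created_at AS date FROM tweets
UNION ALL
SELECT 'github' AS source, message AS text, author_date AS date FROM commits
ORDER BY date
""").fetchall()
for row in rows:
    print(row)
```

If a source system changes its schema, only the importer and the corresponding `SELECT` arm need updating; the combined view keeps working.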
-
Using sqlite3 as a notekeeping document graph
I've been exploring this technique more over the past year and I really like it - https://datasette.io (code at https://github.com/simonw/datasette.io ) is a more recent and much more complicated example.
Extracting links from markdown and using them to populate some additional columns or tables at build time would be pretty straightforward.
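The extraction step could be as small as a regular expression pass over each document. A minimal sketch, assuming simple inline `[text](target)` links; a real build step might use a proper markdown parser instead:

```python
import re

# Matches inline markdown links: [link text](target)
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)\s]+)\)")

def extract_links(markdown_text):
    """Return (text, target) pairs, ready to insert into a links table."""
    return LINK_RE.findall(markdown_text)

doc = "See [litequeue](https://github.com/litements/litequeue) and [Datasette](https://datasette.io)."
print(extract_links(doc))
```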
-
Ask HN: What novel tools are you using to write web sites/apps?
-
What's New in SQLite 3.35
I run SQLite in serverless environments (Cloud Run, Vercel, Heroku) for dozens of projects... but the trick is that they all treat the database as a read-only asset.
If I want to deploy updated data, I build a brand new image and deploy the application bundled with the data. I tend to run the deploys for these (including the database build) in GitHub Actions workflows.
This works really well, but only for applications that don't need to apply updates more than a few times an hour! If you have a constant stream of updates, I still think you're better off using a hosted database like Heroku PostgreSQL or Google Cloud SQL.
One example of a site I deploy like that is https://datasette.io/ - it's built and deployed by this GitHub Actions workflow here: https://github.com/simonw/datasette.io/blob/main/.github/wor...
What are some alternatives?
datasette-dateutil - dateutil functions for Datasette
datasette - An open source multi-tool for exploring and publishing data
pgjobq - Atomic low latency job queues running on Postgres
gomodest - A complex SAAS starter kit using Go, the html/template package, and sprinkles of javascript.
Bedrock - Rock solid distributed database specializing in active/active automatic failover and WAN replication
org-roam-server - A Web Application to Visualize the Org-Roam Database
sqlite_modern_cpp - The C++14 wrapper around sqlite library
openapi-generator - OpenAPI Generator allows generation of API client libraries (SDK generation), server stubs, documentation and configuration automatically given an OpenAPI Spec (v2, v3)
litestream - Streaming replication for SQLite.
headlessui - Completely unstyled, fully accessible UI components, designed to integrate beautifully with Tailwind CSS.
starqueue
SvelteKit - web development, streamlined