| | pgbouncer | spec |
|---|---|---|
| Mentions | 34 | 62 |
| Stars | 2,692 | 8,824 |
| Growth | 3.1% | 3.4% |
| Activity | 8.7 | 0.0 |
| Latest commit | 5 days ago | 16 days ago |
| Language | C | - |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pgbouncer
-
MongoDB and Load Balancer Support
Because MongoDB drivers all consistently provide connection monitoring and pooling, external connection pooling solutions (e.g., Pgpool, PgBouncer) aren't required. This makes applications built on MongoDB drivers resilient and scalable out of the box. But given what we understand about the number of connections applications establish to MongoDB clusters, it stands to reason that as our application deployments grow, so will our connection counts.
-
My journey optimizing a Django application
PgBouncer would solve the Postgres connection-limit problem. But keeping the API "healthy" already kept the number of connections low enough.
- PgBouncer 1.21.0 – "The one with prepared statements"
- Pgbouncer adds support for prepared statements
-
PgBouncer is useful, important, and fraught with peril
PgBouncer maintainer here. Overall I think this is a great description of the tradeoffs that PgBouncer brings and how to work around/manage them. I'm actively working on fixing quite a few of the issues raised in this blog, though:
1. Named protocol-level prepared statements in transaction mode has a PR that's pretty close to being merged: https://github.com/pgbouncer/pgbouncer/pull/845
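For anyone who wants to try this once released: in PgBouncer 1.21 the feature is gated behind a single setting. A minimal `pgbouncer.ini` sketch (the setting name `max_prepared_statements` is from the 1.21 release notes; the value 200 is just an example):

```ini
[pgbouncer]
pool_mode = transaction
; Added in PgBouncer 1.21: a value > 0 enables tracking of named
; protocol-level prepared statements across pooled server connections.
max_prepared_statements = 200
```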
-
Supavisor: Scaling Postgres to 1 Million Connections
A common solution is connection pooling. Supabase currently offers pgbouncer which is single-threaded, making it difficult to scale. We've seen some novel ways to scale pgbouncer, but we have a few other goals in mind for our platform.
-
Citus 12: Schema-based sharding for PostgreSQL
Great observation! :)
We worked upstream to have `search_path` properly handled (tracked per client) by pgbouncer.
https://github.com/pgbouncer/pgbouncer/commit/8c18fc4d213ad4...
Check config.md in that commit for a verbose, humanized description.
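As an operational sketch of what that looks like (assuming a PgBouncer build that includes the commit above; `track_extra_parameters` is PgBouncer's setting for per-client parameter tracking, but the config.md in that commit is the authoritative reference for whether `search_path` is accepted):

```ini
[pgbouncer]
pool_mode = transaction
; Hypothetical example: have PgBouncer track search_path per client,
; restoring it when the client is handed a different server connection.
track_extra_parameters = search_path
```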
spec
-
The UX of UUIDs
Can use ULID to "fix" some issues
https://github.com/ulid/spec
- Ulid: Universally Unique Lexicographically Sortable Identifier
-
Ask HN: Is it acceptable to use a date as a primary key for a table in Postgres?
Both ULID and UUID v7 have a time code component which can be extracted.
For indexing it's best to store the actual value in binary, though that's not strictly necessary: these newer standards (unlike conventional random UUIDs) use time-code prefixes, so index entries cluster by insertion time.
https://uuid7.com/
https://github.com/ulid/spec
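To illustrate the extraction mentioned above, a minimal Python sketch that decodes the 48-bit millisecond timestamp from a ULID's first ten Crockford-base32 characters (the example ULID is the one from the spec's README):

```python
import datetime

# Crockford's base32 alphabet used by ULID (no I, L, O, U).
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid_timestamp(ulid: str) -> datetime.datetime:
    """Decode the 48-bit unix-millisecond timestamp encoded in the
    first 10 characters (10 * 5 bits = 50 bits, top 2 unused)."""
    ms = 0
    for ch in ulid[:10].upper():
        ms = ms * 32 + ALPHABET.index(ch)
    return datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)

print(ulid_timestamp("01ARZ3NDEKTSV4RRFFQ69G5FAV"))
```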
-
Bye Sequence, Hello UUIDv7
UUIDv7 is a nice idea, and should probably be what people use by default instead of UUIDv4.
For the curious:
* UUIDv4 are 128 bits long, 122 bits of which are random, with 6 bits used for the version. Traditionally displayed as 32 hex characters with 4 dashes, so 36 alphanumeric characters, and compatible with anything that expects a UUID.
* UUIDv7 are 128 bits long, 48 bits encode a unix timestamp with millisecond precision, 6 bits are for the version, and 74 bits are random. You're expected to display them the same as other UUIDs, and should be compatible with basically anything that expects a UUID. (Would be a very odd system that parses a UUID and throws an error because it doesn't recognise v7, but I guess it could happen, in theory?)
* ULIDs (https://github.com/ulid/spec) are 128 bits long, 48 bits encode a unix timestamp with millisecond precision, 80 bits are random. You're expected to display them in Crockford's base32, so 26 alphanumeric characters. Compatible with almost everything that expects a UUID (since they're the right length). Spec has some dumb quirks if followed literally but thankfully they mostly don't hurt things.
* KSUIDs (https://github.com/segmentio/ksuid) are 160 bits long, 32 bits encode a timestamp with second precision and a custom epoch of May 13th, 2014, and 128 bits are random. You're expected to display them in base62, so 27 alphanumeric characters. Since they're a different length, they're not compatible with UUIDs.
I quite like KSUIDs; I think base62 is a smart choice. And while the timestamp portion is a trickier question, KSUIDs use 32 bits which, at second precision (more than good enough), won't overflow for well over a century. UUIDv7s use 48 bits, so even at millisecond precision (not needed) they won't overflow for something like 8,000 years. We can argue whether 100 years is future-proof enough (I'd argue it probably is), but 8,000 years is just silly: nobody will ever generate a compliant UUIDv7 whose first several bits aren't 0. The only downside to KSUIDs is that the length isn't UUID compatible (and arguably, that they don't devote 6 bits to a compliant UUID version).
Still feels like there's room for improvement, but for now I think I'd always pick UUIDv7 over UUIDv4 unless there's a very specific reason not to.
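A quick Python sketch checking the rollover arithmetic above, plus the timestamp extraction mentioned upthread (the `>> 80` works because UUIDv7 puts its 48-bit millisecond timestamp in the top bits of the 128-bit value):

```python
import uuid

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# KSUID: 32 bits of seconds since its custom 2014 epoch.
ksuid_years = 2**32 / SECONDS_PER_YEAR            # roughly 136 years
# UUIDv7: 48 bits of milliseconds since the Unix epoch.
uuid7_years = 2**48 / (1000 * SECONDS_PER_YEAR)   # roughly 8,900 years

def uuid7_millis(u: uuid.UUID) -> int:
    """The top 48 of a UUID's 128 bits hold UUIDv7's unix milliseconds."""
    return u.int >> 80

print(round(ksuid_years), round(uuid7_years))
```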
-
50 years later, is Two-Phase Locking the best we can do?
I'd love for Postgres to adopt ULID as a first-class variant of the same basic 128-bit-wide binary-optimized column type it uses for UUIDs, but I don't expect it will. While ULID is "popular," it's not likely popular enough for them to commit to maintaining support in the long run. Also, the smart money ahead of time would have been for the ULID spec to sacrifice a few data bits and leave the version-specifying sections of the bit-field layout unused in the ULID binary spec (https://github.com/ulid/spec#binary-layout-and-byte-order), for the sake of future compatibility with "proper" UUIDs. Performing one big bulk bit-field modification on a PostgreSQL column would have been much less painful than recomputing appropriate UUIDv7s (or UUIDv8s, for some reason) and then having to perform a primary-key update on every row in the table.
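The premise of storing ULIDs in the existing 128-bit `uuid` column type can be sketched in Python: decode the 26 Crockford-base32 characters into a 128-bit integer and reinterpret it as a UUID value (a hypothetical helper, not part of any library mentioned here):

```python
import uuid

# Crockford's base32 alphabet used by ULID (no I, L, O, U).
CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid_as_uuid(ulid: str) -> uuid.UUID:
    """Reinterpret a 26-char ULID's 128 bits as a UUID value.
    26 chars * 5 bits = 130 bits; a valid ULID keeps the top 2 at zero,
    so the result always fits in a 128-bit uuid column."""
    n = 0
    for ch in ulid.upper():
        n = n * 32 + CROCKFORD.index(ch)
    return uuid.UUID(int=n)
```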
- FLaNK Stack Weekly for 12 September 2023
- You Don't Need UUID
- UUID Collision
-
Type-safe, K-sortable, globally unique identifier inspired by Stripe IDs
Many people had the same idea. For example, ULID (https://github.com/ulid/spec) is more compact and stores the time, so it is lexically ordered.
- ULID: Universally Unique Lexicographically Sortable Identifier
What are some alternatives?
odyssey - Scalable PostgreSQL connection pooler
dynamodb-onetable - DynamoDB access and management for one table designs with NodeJS
asyncpg - A fast PostgreSQL Database Client Library for Python/asyncio.
uuid6-ietf-draft - Next Generation UUID Formats
kuuid - K-sortable UUID - roughly time-sortable unique id generator
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
python-ksuid - A pure-Python KSUID implementation
pgcat - PostgreSQL pooler with sharding, load balancing and failover support.
ulid-lite - Generate unique, yet sortable identifiers
rds-auth-proxy - A "passwordless" login experience for your AWS RDS
shortuuid.rb - Convert UUIDs & numbers into space efficient and URL-safe Base62 strings, or any other alphabet.