pgbouncer vs py-spy

| | pgbouncer | py-spy |
|---|---|---|
| Mentions | 34 | 25 |
| Stars | 2,648 | 11,850 |
| Growth | 3.8% | - |
| Activity | 8.7 | 6.4 |
| Latest commit | 6 days ago | 20 days ago |
| Language | C | Rust |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pgbouncer
-
MongoDB and Load Balancer Support
Because MongoDB drivers all consistently provide connection monitoring and pooling functionality, external connection pooling solutions (e.g. Pgpool, PgBouncer) aren't required. This makes applications built on MongoDB drivers resilient and scalable out of the box. Still, based on what we understand about the number of connections applications establish to MongoDB clusters, it stands to reason that as our application deployments grow, so will our connection counts.
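The driver-level pooling described above is essentially a bounded set of reusable connections guarded by a queue. A minimal, generic sketch of the idea in Python (not MongoDB-specific; the `make_conn` factory and `max_size` value are placeholders):

```python
import queue

class ConnectionPool:
    """A minimal bounded connection pool: reuse idle connections,
    create new ones only up to max_size, block once the cap is hit."""

    def __init__(self, make_conn, max_size=10):
        self._make_conn = make_conn          # factory for new connections
        self._idle = queue.Queue(maxsize=max_size)
        self._max_size = max_size
        self._created = 0

    def acquire(self):
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            if self._created < self._max_size:
                self._created += 1
                return self._make_conn()     # grow the pool
            return self._idle.get()          # block until one is released

    def release(self, conn):
        self._idle.put(conn)                 # hand it back for reuse
```

A real driver adds health checks, timeouts, and per-server pools on top of this, but the cap on total connections is the part that keeps cluster-wide connection counts from growing with every new application instance.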
-
My journey optimizing a Django application
Pgbouncer solved the Postgres connection-limit problem. But a "healthy" API already kept the number of connections low enough.
- PgBouncer 1.21.0 – "The one with prepared statements"
- Pgbouncer adds support for prepared statements
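As of 1.21.0, prepared-statement support is controlled by a setting in `pgbouncer.ini`; a sketch (the value `100` is just an example, and `0` disables the feature):

```ini
[pgbouncer]
pool_mode = transaction
; cap on how many named prepared statements PgBouncer tracks
; per client connection; 0 turns the feature off
max_prepared_statements = 100
```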
-
PgBouncer is useful, important, and fraught with peril
Pgbouncer maintainer here. Overall I think this is a great description of the tradeoffs that PgBouncer brings and how to work around/manage them. I'm actively working on fixing quite a few of the issues in this blog, though:
1. Named protocol-level prepared statements in transaction mode has a PR that's pretty close to being merged: https://github.com/pgbouncer/pgbouncer/pull/845
-
Supavisor: Scaling Postgres to 1 Million Connections
A common solution is connection pooling. Supabase currently offers pgbouncer, which is single-threaded, making it difficult to scale. We've seen some novel ways to scale pgbouncer, but we have a few other goals in mind for our platform.
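One of those "novel ways" to scale a single-threaded pooler is to run several pgbouncer processes that all listen on the same port, which PgBouncer supports via `so_reuseport`; a sketch (exact availability depends on PgBouncer version and OS support for `SO_REUSEPORT`):

```ini
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; let multiple pgbouncer processes bind the same port so the kernel
; distributes incoming client connections across them
so_reuseport = 1
```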
-
Citus 12: Schema-based sharding for PostgreSQL
Great observation! :)
We worked upstream to have `search_path` properly handled (tracked per client) by pgbouncer.
https://github.com/pgbouncer/pgbouncer/commit/8c18fc4d213ad4...
Check config.md in that commit for a verbose, humanized description.
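If I read the linked commit right, this works through PgBouncer's `track_extra_parameters` setting, so per-client tracking of `search_path` would look roughly like this (an assumption based on the commit, not verified against a final release):

```ini
[pgbouncer]
pool_mode = transaction
; remember this parameter per client and replay it whenever the
; client is handed a different server connection from the pool
track_extra_parameters = search_path
```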
py-spy
- My journey optimizing a Django application
- Graphical Python Profiler
-
Grasshopper – An Open Source Python Library for Load Testing
For CPU cycles, py-spy[0] is seeing more and more use. For RAM, I would like to know, too...
[0] -- https://github.com/benfred/py-spy
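For reference, typical py-spy invocations look like this (a CLI fragment; the PID and file names are placeholders):

```shell
# live top-like view of where a running process spends CPU time
py-spy top --pid 12345

# record a flame graph of a running process to an SVG
py-spy record -o profile.svg --pid 12345

# or profile a fresh run, including native C/C++/Rust frames
py-spy record --native -o profile.svg -- python myscript.py
```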
-
Debugging a Mixed Python and C Language Stack
There's also py-spy, a profiling tool that can generate flame charts containing a mix of Python and C (or C++) calls.
https://github.com/benfred/py-spy
It's worked really well for my needs
-
python to rust migration
You should profile your consumer to find the bottlenecks. You can use the excellent py-spy (written in Rust). IMO a few uses of Numba here and there should solve your performance issues.
-
Has anyone switched from numpy to Rust?
So as a first step you'll want to profile your program to figure out where it's slow, and hopefully that'll also tell you why it's slow. I'm the (biased) author of the Sciagraph profiler which is designed for this sort of application (https://sciagraph.com) but you can also try py-spy, which isn't as well designed for data processing/analysis applications (e.g. it won't visualize parallelism at all) but can still be informative (https://github.com/benfred/py-spy). Both are written in Rust ;)
-
Trace your Python process line by line with minimal overhead!
Any advantages/disadvantages compared to py-spy [1]?
[1]: https://github.com/benfred/py-spy
-
Python 3.11 delivers.
Python profiling is enabled primarily through cProfile and can be visualized with the help of tools like snakeviz. There are also memory profilers like memray, which does in-depth traces, and sampling profilers like py-spy.
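The cProfile workflow mentioned above is stdlib-only; a minimal sketch (the `slow_sum` workload is a placeholder, and dumping the stats to a file is what snakeviz would then visualize):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # deliberately naive work to give the profiler something to measure
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# print the hottest functions by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

Calling `profiler.dump_stats("out.prof")` instead of printing produces the stats file that `snakeviz out.prof` renders as an interactive flame-style chart.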
-
Tales of serving ML models with low-latency
A good profiler would be https://github.com/benfred/py-spy . If you run your app/benchmark with it, it should be able to draw a flamegraph telling you where the majority of time is spent. The info here is quite fine grained so it would already tell you where the bottleneck is. Without a full-fledged profiler you can also measure the timings in various parts of the code to understand where the bottleneck is.
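The "measure the timings in various parts of the code" fallback mentioned above can be as simple as a context-manager timer; a stdlib-only sketch (the stage names and workloads are placeholders):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Record the wall-clock duration of a code block under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

timings = {}
with timed("preprocess", timings):
    data = [x * 2 for x in range(10_000)]
with timed("aggregate", timings):
    total = sum(data)
# timings now maps each stage name to its duration in seconds
```

This is far coarser than a sampling profiler, but comparing the per-stage durations is often enough to decide which part of a request path deserves a real flame graph.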
-
Profiling a Python library written in Rust (Maturin)
Might be worth raising an issue on py-spy (a Python profiler written in Rust which "supports profiling native python extensions written in languages like C/C++ or Cython") to see if that can close the loop.
What are some alternatives?
odyssey - Scalable PostgreSQL connection pooler
pyflame - A ptracing profiler for Python
asyncpg - A fast PostgreSQL Database Client Library for Python/asyncio.
pyinstrument - 🚴 Call stack profiler for Python. Shows you why your code is slow!
pgcat - PostgreSQL pooler with sharding, load balancing and failover support. [Moved to: https://github.com/postgresml/pgcat]
python-uncompyle6 - A cross-version Python bytecode decompiler
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
memory_profiler - Monitor Memory usage of Python code
icecream - 🍦 Never use print() to debug again.
rds-auth-proxy - A "passwordless" login experience for your AWS RDS
line_profiler - Line-by-line profiling for Python