rkvdns_examples vs aioredis

| | rkvdns_examples | aioredis |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 0 | 2,269 |
| Growth | - | - |
| Activity | 7.6 | 0.0 |
| Latest Commit | 17 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rkvdns_examples
-
Monitoring your logs is mostly a tarpit
Seems defeatist to me.
1) There has to be a notion that some things are worth acknowledging as "events"; this leads to the idea that what logs contain are indicators of events. It's a fundamentally philosophical notion: it means you need to take the time to decide what constitutes an event. To hearken back to machine learning and pirates: global warming may inversely correlate with piracy, but that doesn't imply causation (either way). You can't just throw statistical techniques at data looking for "hits" and assume the hits are significant. Even if you find some indicator, as the article notes, it could change; so you should identify some canary indicators and event those as well.
2) Which leads to the point about "bug parts": don't rely on a specific rare indicator, or on the failure to see such an indicator. If you find high-reliability indicators, great; but also look for indicators which occur more often and can be counted, and track those. For instance: an indicator that systemd is restarting /something/, that it's happening more or less frequently, and that this correlates with a performance observable. If it stops reporting at all, you can start with the presumption that something about logging itself changed.
At this point my philosophical disagreement with centralized logging comes to the fore: it's expensive to load everything into Splunk. I agree, and that's why I disagree with the centralized approach and prefer federation.
You can use the Totalizer Agent (https://github.com/m3047/rkvdns_examples/tree/main/totalizer...) to increment counters in Redis for regex-identified keys. I don't care whether you use RKVDNS to retrieve the data or something else.
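The counting pattern described above can be sketched in a few lines. This is a minimal illustration, not the Totalizer Agent's actual code: the regex, the key format, and `count_event` are all hypothetical, and a local `Counter` stands in for the Redis `INCR` the real agent would issue.

```python
import re
from collections import Counter

# Hypothetical pattern; the real Totalizer Agent configures its own
# regexes for the keys it wants to count.
RESTART_RE = re.compile(r"systemd\[\d+\]: ([\w.@-]+): Scheduled restart")

counters = Counter()  # stand-in for Redis; the agent would INCR a key instead

def count_event(line: str) -> None:
    """Increment a counter keyed by the regex match, if any."""
    m = RESTART_RE.search(line)
    if m:
        counters[f"restart;{m.group(1)}"] += 1  # Redis equivalent: INCR

log = [
    "Jan 01 00:00:01 host systemd[1]: nginx.service: Scheduled restart job",
    "Jan 01 00:05:02 host systemd[1]: nginx.service: Scheduled restart job",
    "Jan 01 00:06:00 host sshd[99]: Accepted publickey for admin",
]
for line in log:
    count_event(line)
# counters now holds {"restart;nginx.service": 2}
```

Because the values are plain counters, any consumer (RKVDNS or otherwise) can read them on its own schedule.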
-
Ask HN: How do you monitor your systemd services?
In general this evolves to a SIEM-like solution in IT or gets added to the tag menagerie in OT.
If you're focused on "notifications are bad" note that notifications are push, and pull solutions are possible. Tail logs (or journalctl) and post significant events to Redis (https://github.com/m3047/rkvdns_examples/tree/main/totalizer...) for example.
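The pull side of this can be sketched as a poller that reads the counter on its own schedule and looks at deltas between reads. `read_counter` is a hypothetical stand-in for a Redis GET (or an RKVDNS lookup); here it replays canned values so the sketch is self-contained.

```python
# Hypothetical pull-side check: poll a counter and compare successive
# reads. A burst of events shows up as a nonzero delta; no push
# notification is involved.
samples = iter([10, 10, 13])

def read_counter() -> int:
    """Stand-in for fetching the counter from Redis (or via RKVDNS)."""
    return next(samples)

def deltas(n: int) -> list:
    prev = read_counter()       # baseline read
    out = []
    for _ in range(n):
        cur = read_counter()    # subsequent scheduled reads
        out.append(cur - prev)  # nonzero delta means new events
        prev = cur
    return out

changes = deltas(2)  # → [0, 3]
```

A real poller would sleep between reads and alert on deltas outside an expected range.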
aioredis
-
Pooling in aioredis may be dangerous
First, there was the aioredis library. We are using a sentinel-based client because it gives us easy failover. aioredis spawns a pool of connections that transparently reconnect (and here is the third thing: forever, hello DDoS) to our sentinel nodes, and then to the master node. It's supposed to do so. Also, we found that if you don't limit the maximum connection count, the library will do it for you and set it to 2 ** 31 (you can see it here), which is the fourth thing. Furthermore, the pool in our version (2.0.1) does not close automatically, which makes the problem worse.
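The failure mode is easier to see with a toy pool. This is not aioredis code; `ToyPool` is a hypothetical stand-in showing why a real cap on `max_connections` keeps the connection count bounded, while a default of 2 ** 31 effectively does not.

```python
import asyncio

# Toy sketch, not aioredis: with an effectively unbounded pool every
# waiter gets to open a fresh connection, while a real cap makes
# extra acquires wait for a slot instead.
class ToyPool:
    def __init__(self, max_connections: int):
        self._sem = asyncio.Semaphore(max_connections)
        self.opened = 0

    async def acquire(self) -> None:
        await self._sem.acquire()  # blocks once the cap is reached
        self.opened += 1           # stand-in for opening a connection

    def release(self) -> None:
        self._sem.release()

async def main() -> int:
    pool = ToyPool(max_connections=2)
    await pool.acquire()
    await pool.acquire()
    # A third acquire would now wait; release one slot first.
    pool.release()
    await pool.acquire()
    return pool.opened

opened = asyncio.run(main())  # → 3
```

With a cap of 2 ** 31 the semaphore never blocks in practice, so a reconnect storm can open connections without limit.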
-
Tips using Redis with FastAPI
I'm hoping to leverage Redis in my project, and I was curious whether anyone has general pointers on how best to manage the DB connection. I'm using aioredis so that I can take advantage of the async functionality, but I haven't been happy with the library's documentation around how best to use the connection pool. Most of the examples create and tear down the pool immediately, without showing how to manage its lifespan.
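One common answer is to tie the pool to the application's lifespan: create it once at startup and close it once at shutdown, rather than per request. The sketch below assumes FastAPI's lifespan hook (`FastAPI(lifespan=...)` accepts exactly this shape); `FakePool` is a stand-in for an aioredis connection pool so the example is self-contained, and with aioredis 2.x you would typically build the real pool with `aioredis.ConnectionPool.from_url(...)` and close it with `disconnect()`.

```python
import asyncio
from contextlib import asynccontextmanager

# FakePool stands in for an aioredis connection pool so this runs
# without a server; disconnect() mirrors the pool method of that name.
class FakePool:
    def __init__(self):
        self.closed = False

    async def disconnect(self):
        self.closed = True

state = {}

@asynccontextmanager
async def lifespan(app):
    # Startup: build the pool once, before any requests are served.
    state["pool"] = FakePool()
    yield
    # Shutdown: close it once, after the last request.
    await state["pool"].disconnect()

async def demo() -> bool:
    async with lifespan(app=None):
        assert not state["pool"].closed  # pool is live while serving
    return state["pool"].closed

closed_after = asyncio.run(demo())  # → True
```

Request handlers then borrow connections from the one shared pool instead of constructing their own.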
What are some alternatives?
collectd-systemd - collectd plugin to monitor systemd services
pottery - Redis for humans. 🌎🌍🌏
Healthchecks - Open-source cron job and background task monitoring service, written in Python & Django
uvloop - Ultra fast asyncio event loop.
uptime-kuma - A fancy self-hosted monitoring tool
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
systemd-utils - Random systemd utilities
Dependency Injector - Dependency injection framework for Python
ntfy - Send push notifications to your phone or desktop using PUT/POST
fastapi-redis-cache - A simple and robust caching solution for FastAPI that interprets request header values and creates proper response header values (powered by Redis)
redis-py-sansio - A sansio-first approach to a Python Redis Client.
faust - Python Stream Processing. A Faust fork