| | opentelemetry-python-contrib | thanos |
|---|---|---|
| Mentions | 3 | 66 |
| Stars | 627 | 12,638 |
| Growth | 6.2% | 0.7% |
| Activity | 9.4 | 9.6 |
| Last commit | 6 days ago | 4 days ago |
| Language | Python | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opentelemetry-python-contrib
-
OpenTelemetry for Python: The Hard Way
In my last blog post, I showed y’all how to instrument Python code with OpenTelemetry (OTel), à la auto-instrumentation. You may also recall from that post that I recommended using the Python auto-instrumentation binary even for non-auto-instrumented libraries, because it abstracts all that pesky OTel config stuff so nicely. When you use it, along with any applicable Python auto-instrumentation libraries (installed courtesy of opentelemetry-bootstrap), it takes care of context propagation across related services for you.
-
Auto-Instrumentation Is Magic: Using OpenTelemetry Python with Lightstep
More specifically, auto-instrumentation uses shims or bytecode instrumentation agents to intercept your code at runtime or at compile-time to add tracing and metrics instrumentation to the libraries and frameworks you depend on. The beauty of auto-instrumentation is that it requires a minimum amount of effort. Sit back, relax, and enjoy the show. A number of popular Python libraries are auto-instrumented, including Flask and Django. You can find the full list here.
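The shim technique described above can be illustrated with a stdlib-only toy: replace a library function with a wrapper that records each call before delegating to the original. All names here are hypothetical stand-ins — the real Python instrumentations do this with `wrapt` and emit spans via the OTel tracer rather than appending to a list.

```python
import types

# A stand-in "library" whose entry point we want to instrument.
def fetch(url):
    return f"response from {url}"

CALLS = []

def instrument(module, name):
    """Replace module.<name> with a wrapper that records each call --
    the runtime-shim technique auto-instrumentation relies on."""
    original = getattr(module, name)

    def wrapper(*args, **kwargs):
        CALLS.append((name, args))       # record the call (a real shim starts a span here)
        return original(*args, **kwargs)  # delegate to the untouched original

    setattr(module, name, wrapper)

# Pretend this namespace is an imported third-party module.
lib = types.SimpleNamespace(fetch=fetch)
instrument(lib, "fetch")

out = lib.fetch("https://example.com")
```

Because the patch happens at runtime, the application code calling `lib.fetch` never changes — which is exactly why auto-instrumentation feels like magic.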
-
Do i really want to mess with OpenTelemetry, or just hook straight into Datadog
And sure, there are gaps, and those are awful when you get to them. But writing a minimal tracing integration is pretty easy. This is the full source of the psycopg2 instrumentation. https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/opentelemetry-instrumentation-psycopg2/src/opentelemetry/instrumentation/psycopg2/__init__.py
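The "minimal integration" point above can be sketched without any OTel dependency: the core of such an instrumentation is just a decorator that opens a span around a call and records timing. The `Span` class, `traced` decorator, and `RECORDED` list below are hypothetical stand-ins for illustration — the real integration uses `opentelemetry.trace` and exports spans through the configured SDK.

```python
import functools
import time

class Span:
    """Toy span: just a name, attributes, and start/end timestamps."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.start = self.end = None

RECORDED = []  # stand-in for a span exporter

def traced(span_name):
    """Decorator that wraps a function in a span -- the same
    intercept-and-record pattern the psycopg2 instrumentation uses."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = Span(span_name)
            span.start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                span.end = time.monotonic()
                RECORDED.append(span)
        return wrapper
    return decorator

@traced("db.query")
def run_query(sql):
    # In a real integration this would call psycopg2; here we fake it.
    return f"executed: {sql}"

result = run_query("SELECT 1")
```

Swapping the toy `Span` for `tracer.start_as_current_span(...)` from the OTel API is essentially all that separates this sketch from a working integration.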
thanos
-
Looking for a way to remote in to K's of raspberry pi's...
Monitoring = netdata on each RPi https://www.netdata.cloud/ bound to the VPN interface, scraped into a Prometheus + Thanos https://thanos.io/ setup with Grafana to give management the "green, all is good" screens (very important).
-
thanos VS openobserve - a user suggested alternative
2 projects | 30 Aug 2023
- FLaNK Stack Weekly for 24 July 2023
- FLaNK Stack Weekly for 10 July 2023
-
Monitoring multiple kubernetes cluster with single Prometheus operator
Sounds like you want something like Thanos
-
Is anyone frustrated with anything about Prometheus?
Yes, but also no. The Prometheus ecosystem already has two FOSS time-series databases that are complementary to Prometheus itself: Thanos and Mimir. Not to mention M3DB, developed at Uber, and Cortex, the ancestor of Mimir. There's a bunch of others I won't mention as it would take too long.
-
Thousandeyes Pricing Model
Long term storage all depends on your needs and sophistication. I use Thanos for our system since it has an extremely flexible scaling system. But there is also Grafana Mimir. They're both similar in that they use Prometheus TSDB format as part of the underlying storage. One nice Thanos advantage is that it does do downsampling in addition to being able to store raw metric data for a long time. It will auto-select downsampled data to make requests faster.
-
Monitoring many cluster k8s
You can aggregate all your clusters' Prometheus metrics together with a wonderful tool called Thanos. This will allow you to use just a single Grafana instance against Thanos, using a label to select which cluster you wish to see metrics from. The downside is that none of the Grafana dashboards from the internet will work as-is; you'll need to customize all of them for Thanos support. The other downside is that you have a single point of failure, and (see next item) you can't customize who can access what with regard to your dev vs. production data/metrics/access.
-
Best unicorn monitoring system?
Depending on how you want to set things up, you can use Thanos or Mimir to create the single-pane-of-glass view of your data.
-
Prometheus vs EFS: I don't know who to believe
You could look at something like Thanos and store your data in S3: https://thanos.io/
What are some alternatives?
vector - A high-performance observability data pipeline.
mimir - Grafana Mimir provides horizontally scalable, highly available, multi-tenant, long-term storage for Prometheus.
debug-toolkit - A modern code-injection framework for Python. Like Pyrasite but Kubernetes-aware.
VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database
opentelemetry-python - OpenTelemetry Python API and SDK
cortex - A horizontally scalable, highly available, multi-tenant, long term Prometheus.
opentelemetry-examples - Example code and resources for working with OpenTelemetry, provided by Lightstep
promscale - [DEPRECATED] Promscale is a unified metric and trace observability backend for Prometheus, Jaeger and OpenTelemetry built on PostgreSQL and TimescaleDB.
opentelemetry.io - The OpenTelemetry website and documentation
Telegraf - Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
opentelemetry-specification - Specifications for OpenTelemetry
istio - Connect, secure, control, and observe services.