| | sloth | opentelemetry-collector-contrib |
|---|---|---|
| Mentions | 11 | 43 |
| Stars | 1,949 | 2,567 |
| Growth | - | 3.2% |
| Activity | 0.0 | 10.0 |
| Latest Commit | 2 months ago | about 11 hours ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sloth
- SLOscribe: embed SLO/SLI into Go source code
It's a CLI that allows developers to embed SLO annotations into Go code as comments and generate Prometheus alert groups when paired with Sloth, https://github.com/slok/sloth.
- Help setting SLIs/SLOs
SLOTH: https://github.com/slok/sloth
- Observability Mythbusters: Yes, Observability-Landscape-as-Code is a Thing
Note: Although it's outside the scope of this post to dig deep into this topic, in case you're curious, you can check out what an OpenSLO YAML definition looks like here.
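Since the linked example isn't reproduced in the excerpt, here is a minimal sketch of what an OpenSLO definition can look like, assuming the `openslo/v1` spec and a hypothetical `web` service measured with Prometheus queries:

```yaml
apiVersion: openslo/v1
kind: SLO
metadata:
  name: web-availability            # hypothetical SLO name
spec:
  service: web                      # hypothetical service
  timeWindow:
    - duration: 28d
      isRolling: true
  budgetingMethod: Occurrences
  objectives:
    - displayName: Good availability
      target: 0.999                 # 99.9% of requests must succeed
  indicator:
    metadata:
      name: web-availability-sli
    spec:
      ratioMetric:
        good:
          metricSource:
            type: Prometheus
            spec:
              query: sum(rate(http_requests_total{code!~"5.."}[5m]))
        total:
          metricSource:
            type: Prometheus
            spec:
              query: sum(rate(http_requests_total[5m]))
```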
- Pyrra v0.3.0 released
- What do you use for observability?
The actual hard part is standardizing all teams on SLI/SLO-based thinking. For that, we're looking at tools like Sloth.
- How do you measure the reliability of a Kubernetes platform?
- Calculating Remaining Error Budget
Have a look at sloth (https://github.com/slok/sloth) which will help you generate SLOs and error budgets given a PromQL query. This might be easier than trying to calculate it yourself. Plus, it's "metrics as code" and OpenSLO spec compliant.
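For a concrete idea of what "metrics as code" means here, a minimal sketch of a Sloth spec (the service name and PromQL queries are hypothetical):

```yaml
version: "prometheus/v1"
service: "myservice"                # hypothetical service name
slos:
  - name: "requests-availability"
    objective: 99.9                 # target: 99.9% of requests succeed
    description: "Availability SLO for HTTP request responses."
    sli:
      events:
        # {{.window}} is templated by Sloth for each SLO window
        error_query: sum(rate(http_request_duration_seconds_count{job="myservice",code=~"(5..|429)"}[{{.window}}]))
        total_query: sum(rate(http_request_duration_seconds_count{job="myservice"}[{{.window}}]))
    alerting:
      name: MyServiceHighErrorRate
      page_alert:
        labels:
          severity: critical
      ticket_alert:
        labels:
          severity: warning
```

Running something like `sloth generate -i slo.yml` then emits the Prometheus recording and alerting rules, including the multi-window burn-rate and error-budget math.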
- openSLO
If you are on k8s and use Prometheus, you could take a look at sloth: https://github.com/slok/sloth, which can either generate the rules/alerts for you or run as an operator that lets you write SLOs as k8s kinds.
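As a sketch of that operator mode, the same kind of SLO expressed as a Kubernetes resource (names and queries are again hypothetical); the operator watches `PrometheusServiceLevel` objects and generates the corresponding rules:

```yaml
apiVersion: sloth.slok.dev/v1
kind: PrometheusServiceLevel
metadata:
  name: myservice-slos              # hypothetical
  namespace: monitoring
spec:
  service: "myservice"
  slos:
    - name: "requests-availability"
      objective: 99.9
      sli:
        events:
          # note the camelCase field names in the CRD form
          errorQuery: sum(rate(http_request_duration_seconds_count{job="myservice",code=~"(5..|429)"}[{{.window}}]))
          totalQuery: sum(rate(http_request_duration_seconds_count{job="myservice"}[{{.window}}]))
      alerting:
        pageAlert:
          labels:
            severity: critical
        ticketAlert:
          labels:
            severity: warning
```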
- SLI/Error Budget Calculators and management
Check out https://github.com/slok/sloth
- SLO calculation
opentelemetry-collector-contrib
- OpenTelemetry at Scale: what can we use to buffer the data behind the collector?
- All you need is Wide Events, not "Metrics, Logs and Traces"
The OpenTelemetry Collector does just that. https://github.com/open-telemetry/opentelemetry-collector-co...
- OpenTelemetry Collector Anti-Patterns
There are two official distributions of the OpenTelemetry Collector: Core and Contrib.
- OpenTelemetry Journey #00 - Introduction to OpenTelemetry
Maybe you are asking yourself: "But I have already instrumented my applications with vendor-specific libraries and I'm using their agents and monitoring tools; why should I switch to OpenTelemetry?" The answer is: maybe you're right, and I don't want to push you to change the way you are doing observability in your applications; that's a hard and complex task. But if you are starting from scratch, or you are not happy with your current observability infrastructure, OpenTelemetry is the best choice, independently of the backend telemetry tool you are using. I would like to invite you to take a look at the number of exporters available in the collector contrib section; if your backend tracing tool is not there, it probably already speaks the OpenTelemetry Protocol (OTLP) and you will be able to use the core collector. Otherwise, you should consider changing your backend telemetry tool or contributing a new exporter to the project.
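To illustrate that last point: if the backend ingests OTLP natively, the core distribution alone is enough. A minimal sketch, where the backend endpoint is a placeholder rather than a real service:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  otlp:
    # placeholder endpoint for an OTLP-native backend
    endpoint: my-backend.example.com:4317
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```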
- Building an Observability Stack with Docker
To receive OTLP data, you set up the standard otlp receiver to accept data over HTTP or gRPC. To forward traces and metrics, a batch processor was defined to accumulate data and send it every 100 milliseconds. Then connections were set up to Tempo (via an otlp/tempo exporter, a standard otlp exporter) and to Prometheus (via a prometheus exporter). A debug exporter was also added to log info to the container's standard I/O and see how the collector is working.
- Spotlight: Sentry for Development
Thanks for the reply. Would the Spotlight sidecar possibly be able to run independently and consume spans emitted by the Sentry exporter[0] or some other similar flow beyond strictly exporting directly from the Sentry SDK provided by Spotlight?
This tooling looks really cool and I'd love to play around with it, but I'm already pretty entrenched in OTel and funneling data through the collector, and don't want to introduce too much additional overhead for devs.
[0] https://github.com/open-telemetry/opentelemetry-collector-co...
- Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
A list of all metric definitions can be found here.
- Spring Boot Monitoring with Open-Source Tools
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
      processes: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector-binary
          scrape_interval: 60s
          static_configs:
            - targets: ["localhost:8889"]
        - job_name: "jvm-metrics"
          scrape_interval: 10s
          metrics_path: "/actuator/prometheus"
          static_configs:
            - targets: ["localhost:8090"]

processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  # Ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/README.md
  resourcedetection:
    detectors: [env, system]
    # Before the system detector, include ec2 for AWS, gcp for GCP and azure for Azure.
    # Using the OTEL_RESOURCE_ATTRIBUTES env var, the env detector adds custom labels.
    timeout: 2s
    system:
      hostname_sources: [os]
      # alternatively, use [dns, os] for setting FQDN as host.name and os as fallback

extensions:
  health_check: {}
  zpages: {}

exporters:
  otlp:
    endpoint: "ingest.{region}.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-access-token":
  logging:
    verbosity: normal

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions: [health_check, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics/internal:
      receivers: [prometheus, hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
- Migrating to OpenTelemetry
If you are using the prometheus exporter, you can use the transform processor to get specific resource attributes into metric labels. This has the advantage that you get only the specific attributes you want, thus avoiding a cardinality explosion.
https://github.com/open-telemetry/opentelemetry-collector-co...
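A minimal sketch of that approach, assuming the contrib transform processor and a `k8s.namespace.name` resource attribute to be promoted (the label name is illustrative):

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # copy one resource attribute onto every datapoint so the
          # prometheus exporter emits it as a metric label
          - set(attributes["namespace"], resource.attributes["k8s.namespace.name"])
```

Because only the listed attributes are copied, the label set, and therefore the cardinality, stays under control.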
- Exploring the OpenTelemetry Collector
- OpenTelemetry Operators
What are some alternatives?
pyrra - Making SLOs with Prometheus manageable, accessible, and easy to use for everyone!
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs
slo-computer - SLOs, error windows, and alerts are complicated. Here's an attempt to make them easy.
cockpit-podman - Cockpit UI for podman containers
kube-prometheus - Use Prometheus to monitor Kubernetes and applications running on Kubernetes
signoz - SigNoz is an open-source observability platform native to OpenTelemetry, with logs, traces, and metrics in a single application. An open-source alternative to DataDog, New Relic, etc.
cloudprober - [Moved to cloudprober/cloudprober] Active monitoring software to detect failures before your customers do.
podman-compose - a script to run docker-compose.yml using podman
OpenSLO - Open specification for defining and expressing service level objectives (SLO)
traefik - The Cloud Native Application Proxy
kube-state-metrics - Add-on agent to generate and expose cluster-level metrics.
serilog-sinks-seq - A Serilog sink that writes events to the Seq structured log server