opentelemetry-collector vs opentelemetry-collector-contrib
| | opentelemetry-collector | opentelemetry-collector-contrib |
|---|---|---|
| Mentions | 16 | 43 |
| Stars | 3,880 | 2,546 |
| Growth | 3.9% | 5.8% |
| Activity | 9.9 | 10.0 |
| Latest commit | about 11 hours ago | about 23 hours ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opentelemetry-collector
-
OpenTelemetry Collector Anti-Patterns
But how does one monitor a Collector? The OTel Collector already emits metrics about its own operation; these can then be sent to your observability backend for monitoring.
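A minimal sketch of what that can look like, scraping the Collector's own metrics endpoint back into a pipeline bound for a backend (the job name and backend endpoint are placeholders; the prometheus receiver ships in the contrib distribution; `service.telemetry.metrics` on port 8888 matches the Collector's conventional self-metrics setup):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector-self     # hypothetical job name
          scrape_interval: 60s
          static_configs:
            - targets: ["localhost:8888"]   # the Collector's own metrics endpoint

exporters:
  otlp:
    endpoint: "my-backend:4317"             # placeholder backend endpoint

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888                 # expose the Collector's self-metrics
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp]
```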
-
OpenTelemetry Journey #00 - Introduction to OpenTelemetry
Maybe you are asking yourself: "But I've already instrumented my applications with vendor-specific libraries and I'm using their agents and monitoring tools. Why should I switch to OpenTelemetry?" The answer is: maybe you're right, and I don't want to push you to change how you do observability in your applications; that's a hard and complex task. But if you are starting from scratch, or you are not happy with your current observability infrastructure, OpenTelemetry is the best choice, regardless of the telemetry backend you use. I'd invite you to look at the number of exporters available in the collector contrib section; if your tracing backend is not there, it probably already supports the OpenTelemetry Protocol (OTLP), and you will be able to use the core collector. Otherwise, you should consider changing your telemetry backend, or contributing a new exporter to the project.
-
Building an Observability Stack with Docker
To receive OTLP data, the standard otlp receiver is set up to accept data over HTTP or gRPC. To forward traces and metrics, a batch processor is defined to accumulate data and send it every 100 milliseconds. Then a connection is set up to Tempo (via the otlp/tempo exporter, a standard otlp exporter) and to Prometheus (via the prometheus exporter). A debug exporter is also added to log info to the container's standard I/O and show how the collector is working.
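A sketch of the configuration just described (the `tempo` hostname, the 8889 scrape port, and the TLS setting are assumptions for a local Docker network):

```yaml
receivers:
  otlp:
    protocols:
      grpc:   # accepts OTLP over gRPC, default port 4317
      http:   # accepts OTLP over HTTP, default port 4318

processors:
  batch:
    timeout: 100ms   # flush accumulated data every 100 milliseconds

exporters:
  otlp/tempo:
    endpoint: tempo:4317   # assumed Tempo service name on the Compose network
    tls:
      insecure: true       # assumption: no TLS between local containers
  prometheus:
    endpoint: 0.0.0.0:8889 # assumed port for Prometheus to scrape
  debug:                   # logs telemetry to the container's standard output
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo, debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus, debug]
```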
-
Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
You can find more details on advanced configurations here.
-
Go 1.21
> opentelemetry is basically a house of antipatterns
"Look on My Works Ye Mighty and Despair!"
https://github.com/open-telemetry/opentelemetry-collector/tr... -> https://github.com/open-telemetry/opentelemetry-collector-re... ... and then a reasonable person trying to load that mess into their head may ask 'err, what's the difference between go.opentelemetry.io/collector and github.com/open-telemetry/opentelemetry-collector-contrib?'
$ curl -fsS go.opentelemetry.io/collector | grep go-import
-
Options Pattern in Golang
open-telemetry/opentelemetry-collector: OpenTelemetry Collector (github.com)
-
Display CockroachDB metrics in Splunk Dashboards
There are two collector distributions: core and contrib. I used contrib because it includes the splunk_hec exporter.
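For reference, a minimal sketch of a contrib pipeline using that exporter (the HEC endpoint and index are placeholders, and the token is assumed to come from an environment variable):

```yaml
receivers:
  otlp:
    protocols:
      grpc:   # accepts OTLP over gRPC, default port 4317

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"   # placeholder: HEC token from an env var
    endpoint: "https://splunk.example.com:8088/services/collector"  # placeholder HEC endpoint
    index: "metrics"               # assumed target Splunk index

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [splunk_hec]
```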
-
OpenTelemetry Collector on Kubernetes – Part 1
We set the deployment to exactly 1 replica and set the container CPU and memory limits to the minimums that were performance-tested in their docs.
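A sketch of the corresponding Deployment manifest (the image tag and the specific CPU/memory values below are placeholders, not the documented minimums):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
spec:
  replicas: 1                  # exactly one Collector pod
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector:latest   # placeholder image tag
          resources:
            limits:
              cpu: 200m        # placeholder; use the minimum from the docs
              memory: 400Mi    # placeholder; use the minimum from the docs
```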
-
Observability Mythbusters: How hard is it to get started with OpenTelemetry?
Lightstep ingests data in native OpenTelemetry Protocol (OTLP) format, so we will use the OTLP Exporter. The exporter can be named either otlp or follow the naming format otlp/<name>. We could call it otlp/bob if we wanted to. We're calling our exporter otlp/ls to signal that we are using the OTLP exporter to send the data to Lightstep.
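A sketch of that exporter block (the ingest endpoint and header name are my assumptions about Lightstep's OTLP ingest; the access token is a placeholder):

```yaml
receivers:
  otlp:
    protocols:
      grpc:   # accepts OTLP over gRPC, default port 4317

exporters:
  otlp/ls:                               # OTLP exporter, named to signal "Lightstep"
    endpoint: ingest.lightstep.com:443   # assumed Lightstep OTLP ingest endpoint
    headers:
      "lightstep-access-token": "${LS_ACCESS_TOKEN}"   # placeholder token via env var

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/ls]
```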
-
OpenTelemetry Collector: A Friendly Guide for Devs
Then we set up a batch processor that batches spans together and sends each batch forward every 1 second. In production you would want more than 1 second, but I set it to 1 second here for instant feedback in Jaeger.
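In Collector terms that is a one-line setting on the batch processor:

```yaml
processors:
  batch:
    timeout: 1s   # demo value for instant feedback in Jaeger; raise this in production
```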
opentelemetry-collector-contrib
-
OpenTelemetry at Scale: what can we use behind the collector to buffer the data?
-
All you need is Wide Events, not "Metrics, Logs and Traces"
The open telemetry collector does just that. https://github.com/open-telemetry/opentelemetry-collector-co...
-
OpenTelemetry Collector Anti-Patterns
There are two official distributions of the OpenTelemetry Collector: Core and Contrib.
-
Spotlight: Sentry for Development
Thanks for the reply. Would the Spotlight sidecar possibly be able to run independently and consume spans emitted by the Sentry exporter[0] or some other similar flow beyond strictly exporting directly from the Sentry SDK provided by Spotlight?
This tooling looks really cool and I'd love to play around with it, but am already pretty entrenched into OTel and funneling data through the collector and don't want to introduce too much additional overhead for devs.
[0] https://github.com/open-telemetry/opentelemetry-collector-co...
-
Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
A list of all metric definitions can be found here.
-
Spring Boot Monitoring with Open-Source Tools
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
      processes: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector-binary
          scrape_interval: 60s
          static_configs:
            - targets: ["localhost:8889"]
        - job_name: "jvm-metrics"
          scrape_interval: 10s
          metrics_path: "/actuator/prometheus"
          static_configs:
            - targets: ["localhost:8090"]

processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  # Ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/README.md
  resourcedetection:
    detectors: [env, system]
    # Before the system detector, include ec2 for AWS, gcp for GCP, and azure for Azure.
    # Via the OTEL_RESOURCE_ATTRIBUTES env var, the env detector adds custom labels.
    timeout: 2s
    system:
      hostname_sources: [os]  # alternatively, use [dns, os] to set the FQDN as host.name with os as fallback

extensions:
  health_check: {}
  zpages: {}

exporters:
  otlp:
    endpoint: "ingest.{region}.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-access-token":   # token value elided in the source
  logging:
    verbosity: normal

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions: [health_check, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics/internal:
      receivers: [prometheus, hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
-
Migrating to OpenTelemetry
If you are using the prometheus exporter, you can use the transform processor to get specific resource attributes into metric labels.
The advantage is that you get only the specific attributes you want, avoiding a cardinality explosion.
https://github.com/open-telemetry/opentelemetry-collector-co...
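A sketch of that approach using the contrib transform processor (the attribute names here are illustrative; copying from `resource.attributes` in the datapoint context is the documented mechanism):

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # Copy one chosen resource attribute onto each datapoint, where the
          # prometheus exporter turns it into a metric label. The names
          # "namespace" / "k8s.namespace.name" are hypothetical examples.
          - set(attributes["namespace"], resource.attributes["k8s.namespace.name"])
```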
-
Exploring the OpenTelemetry Collector
OpenTelemetry Operators
What are some alternatives?
go-sql-driver/mysql - Go MySQL Driver is a MySQL driver for Go's (golang) database/sql package
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs
GORM - The fantastic ORM library for Golang, aims to be developer friendly
cockpit-podman - Cockpit UI for podman containers
jaeger - CNCF Jaeger, a Distributed Tracing Platform
signoz - SigNoz is an open-source observability platform native to OpenTelemetry, with logs, traces, and metrics in a single application. An open-source alternative to DataDog, New Relic, etc.
go-ethereum - Go implementation of the Ethereum protocol
podman-compose - a script to run docker-compose.yml using podman
argo-cd - Declarative Continuous Deployment for Kubernetes
traefik - The Cloud Native Application Proxy
prometheus - The Prometheus monitoring system and time series database.
serilog-sinks-seq - A Serilog sink that writes events to the Seq structured log server