opentelemetry-collector-contrib
opentelemetry-proto
| | opentelemetry-collector-contrib | opentelemetry-proto |
|---|---|---|
| Mentions | 43 | 8 |
| Stars | 2,546 | 524 |
| Growth | 5.8% | 3.4% |
| Activity | 10.0 | 8.0 |
| Latest commit | 5 days ago | 5 days ago |
| Language | Go | Makefile |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opentelemetry-collector-contrib
-
OpenTelemetry at Scale: what buffer can we use behind the collector to buffer the data?
-
All you need is Wide Events, not "Metrics, Logs and Traces"
The OpenTelemetry Collector does just that. https://github.com/open-telemetry/opentelemetry-collector-co...
-
OpenTelemetry Collector Anti-Patterns
There are two official distributions of the OpenTelemetry Collector: Core and Contrib.
-
OpenTelemetry Journey #00 - Introduction to OpenTelemetry
Maybe you are asking yourself: "But I have already instrumented my applications with vendor-specific libraries and I'm using their agents and monitoring tools, why should I change to OpenTelemetry?" The answer is: maybe you're right, and I don't want to push you to change how you do observability in your applications; that's a hard and complex task. But if you are starting from scratch, or you are not happy with your current observability infrastructure, OpenTelemetry is the best choice, independently of the backend telemetry tool you are using. I would invite you to look at the number of exporters available in the collector contrib section; if your tracing backend is not there, it probably already supports the OpenTelemetry Protocol (OTLP) and you will be able to use the core collector. Otherwise, you should consider changing your backend telemetry tool, or contributing a new exporter to the project.
-
Building an Observability Stack with Docker
To receive OTLP data, the standard otlp receiver is set up to accept data over HTTP or gRPC. To forward traces and metrics, a batch processor was defined to accumulate data and send it every 100 milliseconds. Then a connection was set up to Tempo (in the otlp/tempo exporter, a standard otlp exporter) and to Prometheus (in the prometheus exporter, a contrib exporter). A debug exporter was also added to log information to the container's standard I/O and show how the collector is working.
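As a rough sketch, a Collector configuration matching that description might look something like this (the Tempo hostname, Prometheus scrape port, and other values are illustrative placeholders, not taken from the article):

```yaml
receivers:
  otlp:
    protocols:
      grpc:            # defaults to 0.0.0.0:4317
      http:            # defaults to 0.0.0.0:4318

processors:
  batch:
    timeout: 100ms     # flush accumulated data every 100 milliseconds

exporters:
  otlp/tempo:
    endpoint: tempo:4317      # placeholder: Tempo container name and OTLP port
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889    # Prometheus scrapes the collector on this port
  debug:
    verbosity: detailed       # logs telemetry to the container's standard output

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo, debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus, debug]
```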
-
Spotlight: Sentry for Development
Thanks for the reply. Would the Spotlight sidecar be able to run independently and consume spans emitted by the Sentry exporter [0], or through some other similar flow, rather than strictly exporting directly from the Sentry SDK provided with Spotlight?
This tooling looks really cool and I'd love to play around with it, but I'm already pretty entrenched in OTel and funneling data through the collector, and don't want to introduce too much additional overhead for devs.
[0] https://github.com/open-telemetry/opentelemetry-collector-co...
-
Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
A list of all metric definitions can be found here.
-
Spring Boot Monitoring with Open-Source Tools
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
      processes: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector-binary
          scrape_interval: 60s
          static_configs:
            - targets: ["localhost:8889"]
        - job_name: "jvm-metrics"
          scrape_interval: 10s
          metrics_path: "/actuator/prometheus"
          static_configs:
            - targets: ["localhost:8090"]

processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  # Ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/README.md
  resourcedetection:
    detectors: [env, system]
    # Before the system detector, include ec2 for AWS, gcp for GCP and azure for Azure.
    # Using the OTEL_RESOURCE_ATTRIBUTES env var, the env detector adds custom labels.
    timeout: 2s
    system:
      hostname_sources: [os]  # alternatively, use [dns, os] to set the FQDN as host.name with os as fallback

extensions:
  health_check: {}
  zpages: {}

exporters:
  otlp:
    endpoint: "ingest.{region}.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-access-token":
  logging:
    verbosity: normal

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions: [health_check, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics/internal:
      receivers: [prometheus, hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
-
Migrating to OpenTelemetry
If you are using the prometheus exporter, you can use the transform processor to copy specific resource attributes into metric labels, with the advantage that you get only the attributes you want, thus avoiding a cardinality explosion.
https://github.com/open-telemetry/opentelemetry-collector-co...
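A hedged sketch of that approach with the transform processor (the attribute names, such as service.name and k8s.namespace.name, are only examples):

```yaml
processors:
  transform/promote-resource-attrs:
    metric_statements:
      - context: datapoint
        statements:
          # copy selected resource attributes onto each data point so the
          # prometheus exporter emits them as metric labels
          - set(attributes["service_name"], resource.attributes["service.name"])
          - set(attributes["namespace"], resource.attributes["k8s.namespace.name"])
```

The processor is then listed in the metrics pipeline ahead of the prometheus exporter, so only these attributes end up as labels.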
-
Exploring the OpenTelemetry Collector
OpenTelemetry Operators
opentelemetry-proto
-
OpenTelemetry Journey #00 - Introduction to OpenTelemetry
Maybe you are asking yourself: "But I have already instrumented my applications with vendor-specific libraries and I'm using their agents and monitoring tools, why should I change to OpenTelemetry?" The answer is: maybe you're right, and I don't want to push you to change how you do observability in your applications; that's a hard and complex task. But if you are starting from scratch, or you are not happy with your current observability infrastructure, OpenTelemetry is the best choice, independently of the backend telemetry tool you are using. I would invite you to look at the number of exporters available in the collector contrib section; if your tracing backend is not there, it probably already supports the OpenTelemetry Protocol (OTLP) and you will be able to use the core collector. Otherwise, you should consider changing your backend telemetry tool, or contributing a new exporter to the project.
-
Did OpenTelemetry deliver on its promise in 2023?
Here's the example payloads for OTLP over JSON and example of how to ingest them: https://github.com/open-telemetry/opentelemetry-proto/tree/m...
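For orientation, a trace payload in OTLP/JSON looks roughly like the following; this is a minimal hand-written example with made-up IDs and timestamps, not one of the repository's sample payloads:

```json
{
  "resourceSpans": [{
    "resource": {
      "attributes": [
        { "key": "service.name", "value": { "stringValue": "checkout" } }
      ]
    },
    "scopeSpans": [{
      "scope": { "name": "example-instrumentation" },
      "spans": [{
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "eee19b7ec3c1b174",
        "name": "GET /cart",
        "kind": 2,
        "startTimeUnixNano": "1700000000000000000",
        "endTimeUnixNano": "1700000000500000000",
        "status": {}
      }]
    }]
  }]
}
```

A document like this can be POSTed to a collector's OTLP/HTTP endpoint at the /v1/traces path.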
-
Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
An OTLP receiver can receive data via gRPC or HTTP using the OTLP format. There are advanced configurations that you can enable via the YAML file.
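As an illustration of those advanced settings, here is a sketch of the otlp receiver with TLS, a raised gRPC message-size limit, and CORS enabled (the certificate paths, size limit, and origin are placeholders):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
        max_recv_msg_size_mib: 16            # placeholder: raise the gRPC message size limit
        tls:
          cert_file: /etc/otel/server.crt    # placeholder certificate paths
          key_file: /etc/otel/server.key
      http:
        endpoint: 0.0.0.0:4318
        cors:
          allowed_origins: ["https://example.com"]   # placeholder origin
```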
-
Transition to OpenTelemetry, enhanced policy testing, and more - Cerbos v0.32
Cerbos fully transitioned from OpenCensus to OpenTelemetry, a move that significantly boosts our metrics and tracing capabilities. This shift not only allows for more efficient integration with a variety of observability products supporting the OpenTelemetry protocol (OTLP), but also offers the flexibility to use push metrics and fine-tune trace sampling. With this update, configuration through the tracing block in Cerbos files is deprecated in favor of using OpenTelemetry environment variables.
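The environment variables in question are the standard OpenTelemetry SDK ones; a hypothetical docker-compose-style snippet (the image reference, collector endpoint, and sampling ratio are assumptions for illustration):

```yaml
services:
  cerbos:
    image: ghcr.io/cerbos/cerbos:latest          # assumed image reference
    environment:
      OTEL_SERVICE_NAME: cerbos
      OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
      OTEL_TRACES_SAMPLER: parentbased_traceidratio
      OTEL_TRACES_SAMPLER_ARG: "0.1"             # sample 10% of new traces
```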
-
OpenTelemetry is not just for Monitoring and Troubleshooting any longer. Announcing Tracetest Open Beta!
Networking is Easy (Really!): since you install the agent directly into the environment where you are running your application, there is no complex networking. When developing in localMode, the agent automatically listens on the common OpenTelemetry Protocol (OTLP) ports, 4317 & 4318.
-
OpenTelemetry in 2023
Oh nice, thank you (and also solumos) for the links! It looks like oteps/pull/171 (merged June 2023) expanded and superseded the opentelemetry-proto/pull/346 PR (closed Jul 2022) [0]. The former resulted in merging OpenTelemetry Enhancement Proposal 156 [1], with some interesting results especially for 'Phase 2' where they implemented columnar storage end-to-end (see the Validation section [2]):
* For univariate time series, OTel Arrow is 2 to 2.5 times better in terms of bandwidth reduction ... and the end-to-end speed is 3.1 to 11.2 times faster
* For multivariate time series, OTel Arrow is 3 to 7 times better in terms of bandwidth reduction ... Phase 2 has [not yet] been .. estimated but similar results are expected.
* For logs, OTel Arrow is 1.6 to 2 times better in terms of bandwidth reduction ... and the end-to-end speed is 2.3 to 4.86 times faster
* For traces, OTel Arrow is 1.7 to 2.8 times better in terms of bandwidth reduction ... and the end-to-end speed is 3.37 to 6.16 times faster
[0]: https://github.com/open-telemetry/opentelemetry-proto/pull/3...
[1]: https://github.com/open-telemetry/oteps/blob/main/text/0156-...
[2]: https://github.com/open-telemetry/oteps/blob/main/text/0156-...
-
Is Protobuf.js Faster Than JSON?
We then modified the benchmark to encode our example data, which is OpenTelemetry trace data.
What are some alternatives?
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs
apm-server - APM Server
cockpit-podman - Cockpit UI for podman containers
odigos - Distributed tracing without code changes. 🚀 Instantly monitor any application using OpenTelemetry and eBPF
signoz - SigNoz is an open-source observability platform native to OpenTelemetry with logs, traces and metrics in a single application. An open-source alternative to DataDog, NewRelic, etc. 🔥 🖥. 👉 Open source Application Performance Monitoring (APM) & Observability tool
opentelemetry-java - OpenTelemetry Java SDK
podman-compose - a script to run docker-compose.yml using podman
protobuf - Protocol Buffers for JavaScript (& TypeScript).
traefik - The Cloud Native Application Proxy
community - OpenTelemetry community content
serilog-sinks-seq - A Serilog sink that writes events to the Seq structured log server
opentelemetry-collector - OpenTelemetry Collector