opentelemetry-collector-contrib vs jaeger
| | opentelemetry-collector-contrib | jaeger |
|---|---|---|
| Mentions | 43 | 94 |
| Stars | 2,546 | 19,409 |
| Growth | 5.8% | 1.5% |
| Activity | 10.0 | 9.7 |
| Latest commit | 5 days ago | 5 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opentelemetry-collector-contrib
OpenTelemetry at Scale: what buffer can we use behind the collector to buffer the data?
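One possible answer, sketched from the contrib components: exporter sending queues can be backed by the file_storage extension, giving an on-disk buffer that survives restarts. A minimal sketch; the directory and endpoint are illustrative:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

extensions:
  file_storage:
    directory: /var/lib/otelcol/buffer  # illustrative path for the on-disk queue

exporters:
  otlp:
    endpoint: backend.example.com:4317  # illustrative backend endpoint
    sending_queue:
      enabled: true
      storage: file_storage  # queue to disk instead of in memory

service:
  extensions: [file_storage]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```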
-
All you need is Wide Events, not "Metrics, Logs and Traces"
The OpenTelemetry Collector does just that. https://github.com/open-telemetry/opentelemetry-collector-co...
-
OpenTelemetry Collector Anti-Patterns
There are two official distributions of the OpenTelemetry Collector: Core and Contrib.
-
OpenTelemetry Journey #00 - Introduction to OpenTelemetry
Maybe you are asking yourself: "But I've already instrumented my applications with vendor-specific libraries and I'm using their agents and monitoring tools, so why should I switch to OpenTelemetry?" The answer: maybe you're right, and I don't want to push you to change how you do observability in your applications; that's a hard and complex task. But if you are starting from scratch, or you are not happy with your current observability infrastructure, OpenTelemetry is the best choice, independently of the backend telemetry tool you use. I'd invite you to look at the number of exporters available in the collector contrib section: if your backend tracing tool is not there, it probably already supports the OpenTelemetry Protocol (OTLP) and you will be able to use the core collector. Otherwise, you should consider changing your backend telemetry tool or contributing a new exporter to the project.
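For the OTLP-only case the comment describes, a core-collector config can be very small. A sketch, with a placeholder backend endpoint:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: your-backend.example.com:4317  # placeholder; any OTLP-capable backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```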
-
Building an Observability Stack with Docker
To receive OTLP data, the standard otlp receiver is set up to accept data over HTTP or gRPC. To forward traces and metrics, a batch processor accumulates data and sends it every 100 milliseconds. Connections are then set up to Tempo (via the otlp/tempo exporter, a standard otlp exporter) and to Prometheus (via the prometheus exporter). A debug exporter was also added to log info to the container's standard output and show how the collector is working.
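A sketch of the configuration that description implies; the tempo:4317 service name and the ports are assumptions, not taken from the article:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}

processors:
  batch:
    timeout: 100ms  # flush accumulated telemetry every 100 milliseconds

exporters:
  otlp/tempo:
    endpoint: tempo:4317  # assumed Tempo service name on the Docker network
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889  # Prometheus scrapes the collector here
  debug: {}  # logs activity to the container's standard output

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo, debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus, debug]
```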
-
Spotlight: Sentry for Development
Thanks for the reply. Would the Spotlight sidecar possibly be able to run independently and consume spans emitted by the Sentry exporter[0] or some other similar flow beyond strictly exporting directly from the Sentry SDK provided by Spotlight?
This tooling looks really cool and I'd love to play around with it, but I'm already pretty entrenched in OTel, funneling data through the collector, and I don't want to introduce too much additional overhead for devs.
[0] https://github.com/open-telemetry/opentelemetry-collector-co...
-
Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
A list of all metric definitions can be found here.
-
Spring Boot Monitoring with Open-Source Tools
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
      processes: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector-binary
          scrape_interval: 60s
          static_configs:
            - targets: ["localhost:8889"]
        - job_name: "jvm-metrics"
          scrape_interval: 10s
          metrics_path: "/actuator/prometheus"
          static_configs:
            - targets: ["localhost:8090"]

processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  # Ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/README.md
  resourcedetection:
    detectors: [env, system]
    # Before the system detector, include ec2 for AWS, gcp for GCP and azure for Azure.
    # Using the OTEL_RESOURCE_ATTRIBUTES env var, the env detector adds custom labels.
    timeout: 2s
    system:
      hostname_sources: [os]
      # alternatively, use [dns, os] to set the FQDN as host.name with os as fallback

extensions:
  health_check: {}
  zpages: {}

exporters:
  otlp:
    endpoint: "ingest.{region}.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-access-token":
  logging:
    verbosity: normal

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions: [health_check, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics/internal:
      receivers: [prometheus, hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
-
Migrating to OpenTelemetry
If you are using the prometheus exporter, you can use the transform processor to copy specific resource attributes into metric labels, with the advantage that you only get the attributes you want, avoiding a cardinality explosion.
https://github.com/open-telemetry/opentelemetry-collector-co...
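A minimal sketch of that approach, copying one resource attribute onto metric datapoints so the prometheus exporter emits it as a label; the attribute names are illustrative:

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # Promote only the resource attributes you actually want as labels.
          - set(attributes["namespace"], resource.attributes["k8s.namespace.name"])
```

The transform processor would then be listed before the prometheus exporter in the metrics pipeline.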
-
Exploring the OpenTelemetry Collector
OpenTelemetry Operators
jaeger
-
Observability with OpenTelemetry, Jaeger and Rails
Jaeger maps the flow of requests and data as they traverse a distributed system. These requests may make calls to multiple services, which may introduce their own delays or errors. https://www.jaegertracing.io/
-
Show HN: An open source performance monitoring tool
As engineers at past startups, we often had to debug slow queries, poor load times, inconsistent errors, etc... While tools like Jaeger [2] helped us inspect server-side performance, we had no way to tie user events to the traces we were inspecting. In other words, although we had an idea of what API route was slow, there wasn't much visibility into the actual bottleneck.
This is where our performance product comes in: we’re rethinking a tracing/performance tool that focuses on bridging the gap between the client and server.
What’s unique about our approach is that we lean heavily into creating traces from the frontend. For example, if you’re using our Next.js SDK, we automatically connect browser HTTP requests with server-side code execution, all from the perspective of a user. We find this much more powerful because you can understand what part of your frontend codebase causes a given trace to occur. There’s an example here [3].
From an instrumentation perspective, we've built our SDKs on top of OTel, so you can create custom spans to expand highlight-created traces in server routes, and these will transparently roll up into the flame graph you see in our UI. You can also send us raw OTel traces and manually set up the client-server connection if you want. [4] Here's an example of what a trace looks like with a database integration using our Golang GORM SDK, triggered by a frontend GraphQL query [5] [6].
In terms of how it's built, we continue to rely heavily on ClickHouse as our time-series storage engine. Given that traces require that we also query based on an ID for specific groups of spans (more akin to an OLTP db), we’ve leveraged the power of CH materialized views to make these operations efficient (described here [7]).
To try it out, you can spin up the project with our self-hosted docs [8] or use our cloud offering at app.highlight.io. The entire stack runs in Docker via a compose file, including an OpenTelemetry collector for data ingestion. You'll need to point your SDK to export data to it by setting the relevant OTLP endpoint configuration (i.e. the environment variable OTEL_EXPORTER_OTLP_LOGS_ENDPOINT [9]).
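For context, wiring an SDK to a bundled collector from the same compose file looks roughly like this; the my-app service and image names are hypothetical:

```yaml
services:
  my-app:
    image: my-app:latest  # hypothetical instrumented service
    environment:
      # Point log export (and, analogously, traces/metrics) at the collector.
      OTEL_EXPORTER_OTLP_LOGS_ENDPOINT: http://otel-collector:4318/v1/logs
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
```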
Overall, we’d really appreciate feedback on what we’re building here. We’re also all ears if anyone has opinions on what they’d like to see in a product like this!
[1] https://github.com/highlight/highlight/blob/main/LICENSE
[2] https://www.jaegertracing.io
[3] https://app.highlight.io/1383/sessions/COu90Th4Qc3PVYTXbx9Xe...
[4] https://www.highlight.io/docs/getting-started/native-opentel...
[5] https://static.highlight.io/assets/docs/gorm.png
[6] https://github.com/highlight/highlight/blob/1fc9487a676409f1...
[7] https://highlight.io/blog/clickhouse-materialized-views
[8] https://www.highlight.io/docs/getting-started/self-host/self...
[9] https://opentelemetry.io/docs/concepts/sdk-configuration/otl...
-
Kubernetes Ingress Visibility
For following requests, something like Jaeger https://www.jaegertracing.io/ fits, because you are talking more about tracing than logging. For plain monitoring, https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack would be the starting point; then it depends. Nginx exposes metrics out of the box, and you can pull in a dashboard like https://grafana.com/grafana/dashboards/14314-kubernetes-nginx-ingress-controller-nextgen-devops-nirvana/ , or go full metal with something like service mesh monitoring, which would probably fulfil most of the requirements.
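As a sketch of the Nginx part, assuming the ingress-nginx Helm chart and kube-prometheus-stack's ServiceMonitor discovery, the out-of-the-box metrics can be enabled via chart values like these:

```yaml
# values.yaml for the ingress-nginx chart (assumed chart values)
controller:
  metrics:
    enabled: true      # expose Prometheus metrics from the controller
    serviceMonitor:
      enabled: true    # let the Prometheus operator discover and scrape them
      additionalLabels:
        release: kube-prometheus-stack  # assumed Helm release name selector
```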
-
Migrating to OpenTelemetry
Have you checked out Jaeger [1]? It is lightweight enough for a personal project, but featureful enough to really help "turn on the lightbulb" with other engineers to show them the difference between logging/monitoring and tracing.
[1] https://www.jaegertracing.io/
-
The Road to GraphQL At Enterprise Scale
From the perspective of building out GraphQL infrastructure, the interesting direction is "finding": how do you find the problem? How do you find the bottleneck in the system? A Distributed Tracing System (DTS) helps answer these questions. Distributed tracing is a method of observing requests as they propagate through distributed environments. In our scenario, we have dozens of subgraphs, a gateway, and a transport layer through which each request travels. Several tools can trace the whole lifecycle of a request through the system, e.g. Jaeger, Zipkin, or vendors such as New Relic that provide a DTS as part of their solution.
-
OpenTelemetry Exporters - Types and Configuration Steps
Jaeger is an open-source, distributed tracing system that monitors and troubleshoots the flow of requests through complex, microservices-based applications, providing a comprehensive view of system interactions.
-
Fault Tolerance in Distributed Systems: Strategies and Case Studies
However, ensuring fault tolerance in distributed systems is not at all easy. These systems are complex, with multiple nodes or components working together, and a failure in one node can cascade across the system if not addressed in time. Moreover, the inherently distributed nature of these systems makes it challenging to pinpoint the exact location and cause of a fault, which is why modern systems rely heavily on distributed tracing, pioneered by Google's Dapper and now widely available in tools like Jaeger and the OpenTracing ecosystem. Still, fault tolerance is not just about addressing failures but about predicting and mitigating potential risks before they escalate.
-
Observability in Action Part 3: Enhancing Your Codebase with OpenTelemetry
In this article, we'll use Honeycomb.io as our tracing backend. While there are other tools on the market, some of which can run on your local machine (e.g., Jaeger), I chose Honeycomb because of their complementary tools that offer improved monitoring of the service and insights into its behavior.
-
Building for Failure
The best way to do this is with the help of tracing tools: paid offerings such as Honeycomb, your own instance of the open-source Jaeger, or perhaps Encore's built-in tracing system.
-
Distributed Tracing and OpenTelemetry Guide
In this example, I will create 3 Node.js services (shipping, notification, and courier) using Amplication, add traces to all services, and show how to analyze trace data using Jaeger.
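For reference, a common way to run Jaeger locally for this kind of experiment is the all-in-one image with OTLP ingestion enabled; a compose sketch:

```yaml
services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    environment:
      COLLECTOR_OTLP_ENABLED: "true"  # accept OTLP directly from SDKs
    ports:
      - "16686:16686"  # Jaeger UI
      - "4317:4317"    # OTLP gRPC
      - "4318:4318"    # OTLP HTTP
```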
What are some alternatives?
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs
Sentry - Developer-first error tracking and performance monitoring
cockpit-podman - Cockpit UI for podman containers
skywalking - APM, Application Performance Monitoring System
signoz - SigNoz is an open-source observability platform native to OpenTelemetry, with logs, traces and metrics in a single application; an open-source alternative to DataDog, New Relic, etc.
prometheus - The Prometheus monitoring system and time series database.
podman-compose - a script to run docker-compose.yml using podman
traefik - The Cloud Native Application Proxy
Pinpoint - APM (Application Performance Management) tool for large-scale distributed systems.
serilog-sinks-seq - A Serilog sink that writes events to the Seq structured log server
fluent-bit - Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows