terraform-aws-jaeger vs b3-propagation
| | terraform-aws-jaeger | b3-propagation |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 8 | 518 |
| Growth | - | 0.8% |
| Activity | 10.0 | 2.7 |
| Last commit | about 2 years ago | 3 months ago |
| Language | HCL | - |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
terraform-aws-jaeger
- OpenTelemetry in 2023
It's really not that intense. I basically set up my last company's telemetry infrastructure by myself, using Terraform, otel-python, Jaeger, and AWS Elasticsearch.
This TF project does most of the heavy lifting: https://github.com/telia-oss/terraform-aws-jaeger
b3-propagation
- OpenTelemetry in 2023
I've been playing with OTEL for a while, with a few backends like Jaeger and Zipkin, and am trying to figure out a way to perform end to end timing measurements across a graph of services triggered by any of several events.
Consider this scenario: There is a collection of services that talk to one another, and not all use HTTP. Say agent A0 makes a connection to agent A1, this is observed by service S0 which triggers service S1 to make calls to S2 and S3, which propagate elsewhere and return answers.
If we limit the scope of this problem to services explicitly making HTTP calls to other services, we can easily use the Propagators API [1] with X-B3 headers [2] to propagate the trace context (trace ID, span ID, parent span ID) across this graph, from the origin through to the destination and back. This allows me to query the tracing backend (Jaeger or Zipkin) by that trace ID, look at the timestamps recorded at the various services, and compute T_end - T_start to determine the overall round-trip time across all the related services.
However, this breaks when a subset of these services cannot propagate the B3 trace IDs for various reasons (e.g., a service is watching a specific state and acts when the state changes). I've been looking into OTEL and other, non-OTEL ways to capture these metrics, but there appears to be little prior work in this area, even though it doesn't seem like a unique or new problem.
Has anyone here looked at this scenario, and have you had any luck with OTEL or other mechanisms to get results?
[1] https://opentelemetry.io/docs/specs/otel/context/api-propaga...
[2] https://github.com/openzipkin/b3-propagation
[3] https://www.w3.org/TR/trace-context/
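For the HTTP hops the commenter describes, the wire mechanics of [2] are simple enough to sketch by hand. Below is a minimal, dependency-free illustration of B3 multi-header propagation; in real code the OTel Propagators API does this injection/extraction for you, and the service names and IDs here are made up:

```python
import secrets

def inject_b3(carrier, trace_id, span_id, parent_span_id=None, sampled=True):
    """Write B3 multi-header trace context into an outgoing carrier (e.g. HTTP headers)."""
    carrier["X-B3-TraceId"] = trace_id
    carrier["X-B3-SpanId"] = span_id
    if parent_span_id is not None:
        carrier["X-B3-ParentSpanId"] = parent_span_id
    carrier["X-B3-Sampled"] = "1" if sampled else "0"

def extract_b3(carrier):
    """Read B3 trace context from an incoming carrier."""
    return {
        "trace_id": carrier["X-B3-TraceId"],
        "span_id": carrier["X-B3-SpanId"],
        "sampled": carrier.get("X-B3-Sampled", "1") == "1",
    }

# Service S1 receives a request from S0 and fans out a call to S2:
incoming = {
    "X-B3-TraceId": "80f198ee56343ba864fe8b2a57d3eff7",  # 128-bit trace ID
    "X-B3-SpanId": "e457b5a2e4d86bd1",                   # S0's span ID
    "X-B3-Sampled": "1",
}
ctx = extract_b3(incoming)
child_span_id = secrets.token_hex(8)  # fresh 64-bit span ID for S1's call to S2
outgoing = {}
inject_b3(outgoing, ctx["trace_id"], child_span_id, parent_span_id=ctx["span_id"])
# The trace ID survives the hop, so Jaeger/Zipkin can stitch S0 -> S1 -> S2
# into a single trace and compute T_end - T_start across the whole graph.
```

This also makes the commenter's problem concrete: when a hop is not an HTTP call (a service reacting to a state change), there is no carrier to inject into, and the chain of trace IDs breaks.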
- OpenTelemetry and Istio: Everything you need to know
(Note that OpenTelemetry uses, by default, the W3C context propagation specification, while Istio uses the B3 context propagation specification – this can be modified).
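One common way to make that modification: OpenTelemetry SDKs honor the `OTEL_PROPAGATORS` environment variable, so an application running behind Istio can be switched from the default W3C `tracecontext` format to B3 without code changes, e.g.:

```shell
# Replace the default W3C traceparent propagator with B3 multi-header,
# so OTel spans join traces started by Istio's Envoy sidecars.
export OTEL_PROPAGATORS=b3multi
```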
- Spring Cloud Sleuth in action
The default format for context propagation is B3, so we use the headers X-B3-TraceId and X-B3-SpanId.
What are some alternatives?
docs - Prometheus documentation: content and static site generator
trace-context-w3c - W3C Trace Context: its purpose and the problem it solves
proposal-async-context - Async Context for JavaScript
zipkin - Zipkin is a distributed tracing system
opentelemetry-lambda - Create your own Lambda Layer in each OTel language using this starter code. Add the Lambda Layer to your Lambda Function to get tracing with OpenTelemetry.
spring-cloud-sleuth-in-action - 🍀 Spring Cloud Sleuth in Action
opentelemetry-specification - Specifications for OpenTelemetry
odigos - Distributed tracing without code changes. 🚀 Instantly monitor any application using OpenTelemetry and eBPF
oteps - OpenTelemetry Enhancement Proposals
community - OpenTelemetry community content
aws-otel-lambda - AWS Distro for OpenTelemetry - AWS Lambda