| | trace-context-w3c | istio |
|---|---|---|
| Mentions | 11 | 88 |
| Stars | 4 | 34,983 |
| Growth | - | 0.8% |
| Activity | 0.0 | 10.0 |
| Last commit | about 1 year ago | 6 days ago |
| Language | C# | Go |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trace-context-w3c
-
Implementing OTel Trace Context Propagation Through Message Brokers with Go
The answer is Context Propagation. The HTTP example is a classic one, and the W3C even standardizes it. Propagation means adding the relevant fields from the trace context to the HTTP headers, then having the receiving application extract those values and inject them into its own trace context. The same concept applies to any other means of communication. Here, we will focus on message brokers and how you can achieve context propagation with them.
-
OpenTelemetry in 2023
I've been playing with OTEL for a while, with a few backends like Jaeger and Zipkin, and am trying to figure out a way to perform end-to-end timing measurements across a graph of services triggered by any of several events.
Consider this scenario: There is a collection of services that talk to one another, and not all use HTTP. Say agent A0 makes a connection to agent A1, this is observed by service S0 which triggers service S1 to make calls to S2 and S3, which propagate elsewhere and return answers.
If we limit the scope of this problem to services explicitly making HTTP calls to other services, we can easily use the Propagators API [1] with X-B3 headers [2] to propagate the trace context (trace ID, span ID, parent span ID) across this graph, from the origin through to the destination and back. This allows me to query the tracing backend (Jaeger or Zipkin) using this trace ID, look at the timestamps recorded at the various services, and compute T_end - T_start to determine the overall time taken by one round trip across all the related services.
However, this breaks when a subset of these functions cannot propagate the B3 trace IDs for various reasons (e.g., a service is watching a specific state and acts when the state changes). I've been looking into OTEL and other related non-OTEL ways to capture metrics, but it appears there's not much research into this area though it does not seem like a unique or new problem.
Has anyone here looked at this scenario, and have you had any luck with OTEL or other mechanisms to get results?
[1] https://opentelemetry.io/docs/specs/otel/context/api-propaga...
[2] https://github.com/openzipkin/b3-propagation
[3] https://www.w3.org/TR/trace-context/
-
End-to-end tracing with OpenTelemetry
-- https://www.w3.org/TR/trace-context/
-
Event Driven Architecture — 5 Pitfalls to Avoid
For context propagation, why not just reuse the existing trace context that most frameworks and toolkits generate for HTTP requests? I've had to apply some elbow grease to get it to play nice, but once it does you're able to use tools like Jaeger as part of your asynchronous flow as well.
- W3C Recommendation – Trace Context
-
OpenTelemetry and Istio: Everything you need to know
(Note that OpenTelemetry uses, by default, the W3C context propagation specification, while Istio uses the B3 context propagation specification – this can be modified).
-
What is Context Propagation in Distributed Tracing?
The World Wide Web Consortium (W3C) publishes a recommendation on the format of trace context. The aim is a standardized format for passing trace context over common protocols like HTTP. It saves a lot of time in distributed-tracing implementations and ensures interoperability between tracing tools.
- My Logging Best Practices
- Input data validation and error responses in ASP.NET
-
[c#] Using W3C Trace Context standard in distributed tracing
The main objective is to propagate a message with a traceparent ID through two APIs and one worker using the W3C Trace Context standard. The first-api calls the second-api via an HTTP call, while the second-api communicates asynchronously with the worker through a message broker (RabbitMQ was chosen for that). Furthermore, Zipkin was the trace system chosen (or vendor, as the standard calls it), responsible for collecting the application traces and building the distributed-tracing diagram:
istio
-
Multi-region YugabyteDB deployment on AWS EKS with Istio
AWS EKS provides a managed Kubernetes service, simplifying cluster management and deployment. Istio, an open-source service mesh, enables traffic management, security, and observability across microservices.
-
Improve your EKS cluster with Istio and Cilium : Better networking and security
Istio is a popular open-source service mesh framework that provides a comprehensive solution for managing, securing, and observing microservices-based applications running on Kubernetes.
-
Optimal JMX Exposure Strategy for Kubernetes Multi-Node Architecture
Leverage a service mesh like Istio or Linkerd to manage communication between microservices within the Kubernetes cluster. These service meshes can be configured to intercept JMX traffic and enforce access control policies. Benefits:
-
Open Source Ascendant: The Transformation of Software Development in 2024
Open Source and Cloud Computing: A Match Made in Heaven The cloud is accelerating OSS adoption. Cloud-native technologies like Kubernetes [https://kubernetes.io/] and Istio [https://istio.io/], both open-source projects, are revolutionizing how applications are built and deployed across cloud platforms.
-
Delving Deeper: Enriching Microservices with Golang with CloudWeGo
Consider the case of Bookinfo, a sample application provided by Istio, rewritten using CloudWeGo's Kitex for superior performance and extensibility.
-
How to Build & Deploy Scalable Microservices with NodeJS, TypeScript and Docker || A Comprehensive Guide
It is a dedicated infrastructure layer that manages service-to-service communication, providing features like load balancing, encryption, authentication, and monitoring. Istio deploys sidecar proxies alongside each microservice instance. These proxies handle communication, providing features like load balancing, service discovery, encryption, monitoring and authentication.
-
Caddy for Certs and Istio for Reverse Proxy
A five-year-old post sounds like they've done something similar here: Caddy issue, Istio issue; but it doesn't cover much of the implementation.
- Understanding Istio: A Beginner's Guide to Service Mesh
-
Developer’s Guide to Building Kubernetes Cloud Apps ☁️🚀
In a production environment there will be a load balancer set up with an ingress controller, a service mesh, or some type of custom router. This allows all traffic to be sent to a single load-balancer IP address and then routed to a service based on the domain name or subpath. We are using an NGINX ingress controller, but service meshes like Istio are becoming the most popular solution, as they offer more segmentation, security, and granular control.
-
Progressive Delivery on AKS: A Step-by-Step Guide using Flagger with Istio and FluxCD
Flagger is a progressive delivery tool that enables a Kubernetes operator to automate the promotion or rollback of deployments based on metrics analysis. It supports a variety of metrics including Prometheus, Datadog, and New Relic to name a few. It also works well with Istio service mesh, and can implement progressive traffic splitting between primary and canary releases.
What are some alternatives?
b3-propagation - Repository that describes and sometimes implements B3 propagation
osm - Open Service Mesh (OSM) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
opentelemetry-dotnet - The OpenTelemetry .NET Client
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
Serilog.Exceptions - Log exception details and custom properties that are not output in Exception.ToString().
anthos-service-mesh-packages - Packaged configuration for setting up a Kubernetes cluster with Anthos Service Mesh features enabled
zipkin - Zipkin is a distributed tracing system
crossplane - The Cloud Native Control Plane
opentelemetry-specification - Specifications for OpenTelemetry
falco - Cloud Native Runtime Security
RabbitMQ - Open source RabbitMQ: core server and tier 1 (built-in) plugins
kratos - Your ultimate Go microservices framework for the cloud-native era.