kafkacat
b3-propagation
| | kafkacat | b3-propagation |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 3,573 | 516 |
| Growth | - | 1.7% |
| Activity | 7.3 | 2.7 |
| Latest commit | over 2 years ago | 2 months ago |
| Language | C | - |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kafkacat
- Build a data ingestion pipeline using Kafka, Flink, and CrateDB
To communicate with Kafka, you can use Kafkacat, a command-line tool that lets you produce and consume Kafka messages with a very simple syntax. It also lets you view topic metadata.
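As a sketch of that syntax, the basic produce, consume, and metadata modes look like this (the broker address and topic name are placeholders):

```shell
# Produce: each line read from stdin becomes a message on topic "mytopic"
echo "hello world" | kafkacat -P -b localhost:9092 -t mytopic

# Consume: print messages from "mytopic", starting at the beginning
kafkacat -C -b localhost:9092 -t mytopic -o beginning

# Metadata: list brokers, topics, and partitions in the cluster
kafkacat -L -b localhost:9092
```

The `-P`, `-C`, and `-L` flags select the mode; `-b` names the bootstrap broker. These commands assume a broker running locally on the default port.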
- Event Streaming Like it's 1978
Feels like you could get pretty far with kafkacat and a SQLite database.
- ZooKeeper-free Kafka is out. First Demo
- Kafcat 0.1.1 release -- a cat for kafka
This is the second release of Kafcat, a fully async Rust rewrite of kafkacat.
- First steps with Kafka - Part 2
- Spring Cloud Sleuth in action
Consume from the Kafka topic my.topic with kafkacat:
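A minimal consume invocation for that topic, assuming a local broker:

```shell
# Read all messages from my.topic; -e exits once the end of the topic is reached
kafkacat -C -b localhost:9092 -t my.topic -e
```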
- 5 Things Every Apache Kafka Developer Should Know
From the code above, you can see that to process the headers you simply call the ConsumerRecord.headers() method. In the example, we're printing the headers to the console for demonstration purposes; once you have access to them, you can process them as needed. For reading headers from the command line, KIP-431 adds support for optionally printing headers from the ConsoleConsumer, available in the Apache Kafka 2.7.0 release. You can also use kafkacat to view headers from the command line. Here's an example command:
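A sketch of such a command, assuming a local broker and using kafkacat's `-f` format string (`%h` expands to the record headers, `%s` to the value):

```shell
# Print each record's headers and value, then exit at end of topic
kafkacat -C -b localhost:9092 -t my.topic -e \
  -f 'Headers: %h  Value: %s\n'
```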
- Streaming data into Kafka S01/E04 - Loading Log files using Grok Expression
Note: In the example above, we used kafkacat to consume the topics. The option -o-1 is used to consume only the latest message.
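In kafkacat, `-o` sets the start offset, and a negative value is relative to the end of the partition, so `-o -1` starts one message before the end (topic name is a placeholder):

```shell
# Consume only the most recent message from each partition, then exit
kafkacat -C -b localhost:9092 -t mytopic -o -1 -e
```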
b3-propagation
- OpenTelemetry in 2023
I've been playing with OTEL for a while, with a few backends like Jaeger and Zipkin, and am trying to figure out a way to perform end to end timing measurements across a graph of services triggered by any of several events.
Consider this scenario: There is a collection of services that talk to one another, and not all use HTTP. Say agent A0 makes a connection to agent A1, this is observed by service S0 which triggers service S1 to make calls to S2 and S3, which propagate elsewhere and return answers.
If we limit the scope of this problem to services explicitly making HTTP calls to other services, we can easily use the Propagators API [1] and use X-B3 headers [2] to propagate the trace context (trace ID, span ID, parent span ID) across this graph, from the origin through to the destination and back. This allows me to query the metrics collector (Jaeger or Zipkin) using this trace ID, look at the timestamps originating at the various services and do a T_end - T_start to determine the overall time taken by one call for a round trip across all the related services.
However, this breaks when a subset of these functions cannot propagate the B3 trace IDs for various reasons (e.g., a service is watching a specific state and acts when the state changes). I've been looking into OTEL and other non-OTEL ways to capture metrics, but there appears to be little work in this area, even though it doesn't seem like a unique or new problem.
Has anyone here looked at this scenario, and have you had any luck with OTEL or other mechanisms to get results?
[1] https://opentelemetry.io/docs/specs/otel/context/api-propaga...
[2] https://github.com/openzipkin/b3-propagation
[3] https://www.w3.org/TR/trace-context/
- OpenTelemetry and Istio: Everything you need to know
(Note that OpenTelemetry uses the W3C context propagation specification by default, while Istio uses the B3 context propagation specification; this can be modified.)
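The two formats differ only in the headers they put on a request; the IDs below are made up, shown purely to illustrate the shape of each header:

```
# W3C Trace Context (OpenTelemetry default): a single "traceparent" header
#   version-traceid-parentid-flags
traceparent: 00-80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-01

# B3 (Zipkin/Istio): multiple X-B3-* headers, or the single "b3" header
#   traceid-spanid-samplingstate
b3: 80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-1
```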
- Spring Cloud Sleuth in action
The default format for context propagation is B3, so we use the headers X-B3-TraceId and X-B3-SpanId.
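Since these are plain HTTP headers, you can reproduce what an instrumented client sends with a curl call (the endpoint and trace IDs here are made up for illustration):

```shell
# Manually propagate B3 trace context on an outgoing HTTP request
curl -s http://localhost:8080/api/orders \
  -H 'X-B3-TraceId: 80f198ee56343ba864fe8b2a57d3eff7' \
  -H 'X-B3-SpanId: e457b5a2e4d86bd1' \
  -H 'X-B3-Sampled: 1'
```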
What are some alternatives?
Docker Compose - Define and run multi-container applications with Docker
trace-context-w3c - W3C Trace Context purpose of and what kind of problem it came to solve.
redpanda - Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!
zipkin - Zipkin is a distributed tracing system
kafcat - a rust port of kafkacat
spring-cloud-sleuth-in-action - Spring Cloud Sleuth in Action
Apache Kafka - Mirror of Apache Kafka
odigos - Distributed tracing without code changes. Instantly monitor any application using OpenTelemetry and eBPF
jetstream - JetStream Utilities
community - OpenTelemetry community content
java-pubsublite-kafka
oteps - OpenTelemetry Enhancement Proposals