| | kcat | tracing |
|---|---|---|
| Mentions | 18 | 52 |
| Stars | 5,287 | 5,025 |
| Growth | - | 1.7% |
| Activity | 0.0 | 7.9 |
| Latest commit | 6 months ago | 7 days ago |
| Language | C | Rust |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kcat
-
JR, quality Random Data from the Command line, part I
So, is JR yet another faking library written in Go? Yes and no. JR indeed implements most of the APIs in faker.js and gofakeit, but it can also stream data directly to stdout, Kafka, Redis and more (Elastic and MongoDB coming). JR can talk directly to Confluent Schema Registry, manage JSON Schema and Avro schemas, and easily maintain coherence and referential integrity. If you need more than what JR offers out of the box, you can also easily pipe your data streams to other CLI tools like kcat thanks to its flexibility.
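A minimal sketch of that pipe, assuming a broker on localhost:9092, JR's bundled `net_device` template, and a made-up topic name (adjust all three for your setup):

```shell
# Emit random records with JR and stream them into Kafka
# through kcat's producer mode (-P). The broker address and
# the topic name "jr-demo" are placeholders.
jr run net_device | kcat -b localhost:9092 -t jr-demo -P
```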
-
Deploy Apache Kafka® on Kubernetes
This deployment creates a kcat container we can use to produce and consume messages.
-
How to Build a Kafka Producer in Rust with Partitioning
Now we don't see any additional output. To verify it worked, let's use kafkacat to consume the topic's events. (We install kafkacat in the Dev Container. Please run the following command in VSCode's terminal)
-
Apache Kafka: A Quickstart Guide for Developers
Before we come to an end here, let's explore one additional helpful tool: kcat (formerly known as kafkacat).
-
AdTech using SingleStoreDB, Kafka and Metabase
Let's look at the data in the ad_events topic from the Kafka broker and see if we can identify the problem. We'll install kcat (formerly kafkacat):
-
Getting Started as a Kafka Developer
kcat (formerly KafkaCat) - https://github.com/edenhill/kcat
-
Your Experience Learning and Implementing Kafka
Start with multiple consumers and produce events (this gives a sense of consistency and the need for reliable data). The producer could be the command-line tools or kafkacat.
-
Running Apache Kafka on Containers
kcat is an awesome tool that makes our lives easier: it lets us read from and write to Kafka topics without tons of scripts, in a more user-friendly way.
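As a hedged sketch of that workflow (broker address and the topic name `demo` are placeholders, and a running broker is assumed):

```shell
# Show brokers, topics, and partitions (metadata mode).
kcat -b localhost:9092 -L

# Produce: every line read from stdin becomes one message.
echo 'hello kafka' | kcat -b localhost:9092 -t demo -P

# Consume from the beginning of the topic and exit at end of data (-e).
kcat -b localhost:9092 -t demo -C -o beginning -e
```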
-
Unreadable data/log files created by Kafka Producer
-
⌨️ Pipe xlsx files into/from Kafka... From cli with (k)cat 🙀
tracing
-
Decrusting the tracing crate [video] by Jon Gjengset
The video description is as follows:
In this stream, we peel back the crust on the tracing crate — https://github.com/tokio-rs/tracing/ — and explore its interface, structure, and mechanisms. We talk about spans, events, their attributes and fields, and how to think about them in async code. We also dig into what subscribers are, how they pick up events, and how you can construct your own subscribers through the layer abstraction. For more details about tracing, see https://docs.rs/tracing/latest/tracing/.
-
Vendor lock-in is in the small details
> What's been your biggest issues around ergonomics/amenities for OpenTelemetry?
I can't speak generally, but in the Rust ecosystem the various crates don't play well together. Here's one example: <https://github.com/tokio-rs/tracing/issues/2648> There are four crates involved (tracing-attributes, tracing-opentelemetry, opentelemetry, and opentelemetry-datadog) and none of them fit properly into any of the others.
-
Grimoire - A recipe management application.
The tracing (logging) mechanism in an asynchronous codebase (tracing).
-
How easy is it to swap out your async runtime?
Tracing is Tokio's alternative for async code.
-
Hey Rustaceans! Got a question? Ask here (27/2023)!
At a technical level, in Rust, both [tracing](https://crates.io/crates/tracing) and log are entire ecosystems (though for the latter, at least, there are also third-party logging frameworks), and there's at least a bridge from log to tracing.
-
How can I write a tracing subscriber that saves to a database?
I am using https://github.com/tokio-rs/tracing for logging purposes in my application. I would like to develop a feature wherein logs should be saved to a database table (via sea-orm). Something similar is this, but it does not solve my needs fully.
-
A locking war story
I've used the tracing infrastructure with tracing_flame to profile some hot paths in async code: https://github.com/tokio-rs/tracing/tree/master/tracing-flame
-
I was wrong about rust
Oh nice! IIRC when I checked, it was the Unicode tables that smashed the code size. I recently hit the same issue with the tracing crate, where a crate feature (for env var filtering) pulled in regex and my binary was suddenly 1MB bigger.
-
Debugging and profiling embedded applications.
I know about tools such as tracing, jaeger or tracy. While complete tracing could be a potential solution, these tools don't work with no_std.
-
Custom Axum Logging for Routes?
tracing by itself only emits log data; you need to consume it in a subscriber, and the tracing-subscriber crate exists for this. (example)
What are some alternatives?
kafka-python - Python client for Apache Kafka
log4rs - A highly configurable logging framework for Rust
rskafka - A minimal Rust client for Apache Kafka
slog - Structured, contextual, extensible, composable logging for Rust
librdkafka - The Apache Kafka C/C++ library
env_logger - A logging implementation for `log` which is configured via an environment variable.
console - Redpanda Console is a developer-friendly UI for managing your Kafka/Redpanda workloads. Console gives you a simple, interactive approach for gaining visibility into your topics, masking data, managing consumer groups, and exploring real-time data with time-travel debugging.
log - Logging implementation for Rust
templates - Repository for Dev Container Templates that are managed by Dev Container spec maintainers. See https://github.com/devcontainers/template-starter to create your own!
opentelemetry-rust - The Rust OpenTelemetry implementation
jr - JR: streaming quality random data from the command line
vector - A high-performance observability data pipeline.