librdkafka
schema-registry
| | librdkafka | schema-registry |
|---|---|---|
| Mentions | 18 | 7 |
| Stars | 7,292 | 2,138 |
| Growth | 1.2% | 1.4% |
| Activity | 8.3 | 10.0 |
| Latest commit | 4 days ago | 2 days ago |
| Language | C | Java |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
librdkafka
-
Do you use Rust in your professional career?
recent PR: https://github.com/confluentinc/librdkafka/pull/4275
-
JR, quality Random Data from the Command line, part I
# Kafka configuration
# https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md
bootstrap.servers=
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=
sasl.password=
compression.type=gzip
compression.level=9
statistics.interval.ms=1000
-
A Critical Detail about Kafka Partitioners
But what about Kafka producer clients in other languages? The excellent librdkafka project is a C/C++ implementation of Kafka clients and is widely used for non-JVM Kafka applications. Additionally, Kafka clients in other languages (Python, C#) build on top of it. The default partitioner for librdkafka uses the CRC32 hash function to get the correct partition for a key.
-
Horizontally scaling Kafka consumers with rendezvous hashing
We could have made some changes at the librdkafka level (see this), but we didn’t really want to pursue this (at least not yet).
-
Events with same key going to different partitions
You want records with the same key to always land on the same partition, so you need all the clients to use the same hashing algorithm. The easiest way to do that is to make sure the librdkafka client uses the Java-compatible murmur2_random hash algorithm. See the “Partitioner” section here: https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md
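A minimal sketch of that fix, using the confluent-kafka Python client (which wraps librdkafka); the broker address, topic, and key below are placeholders rather than values from the post:

```python
# Minimal sketch: configure a librdkafka-based producer (confluent-kafka Python)
# so that its key hashing matches the Java client's default partitioner.
from confluent_kafka import Producer

config = {
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    # librdkafka's default partitioner is consistent_random (CRC32-based);
    # murmur2_random matches the Java client's default.
    "partitioner": "murmur2_random",
}

producer = Producer(config)

# Records with the same key should now land on the same partition
# as they would for a Java producer.
producer.produce("orders", key="customer-42", value="example payload")
producer.flush()
```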
-
Getting sum type values from a map
As my first "real world" (ish) project in Vlang, I'm trying to copy https://github.com/confluentinc/confluent-kafka-go, which is a Go wrapper for the Kafka C client library, https://github.com/edenhill/librdkafka
-
Installing node-rdkafka on M1 for use with SASL
If you're using Kafka in a Node.js app, it's likely that you'll need node-rdkafka. This is a library that wraps the librdkafka library and makes it available in Node.js. According to the project's README, "All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library."
-
Introduction to Key Apache Kafka® Concepts
from configparser import ConfigParser
from confluent_kafka import Producer

# Parse the configuration.
# See https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
config_parser = ConfigParser()
config_parser.read_file(args.config_file)
config = dict(config_parser['default'])

# Create Producer instance
producer = Producer(config)
-
video analytics on edge
• git clone https://github.com/edenhill/librdkafka.git
- librdkafka - the Apache Kafka C/C++ client library
schema-registry
-
JR, quality Random Data from the Command line, part I
So, is JR yet another faking library written in Go? Yes and no. JR indeed implements most of the APIs of fakerjs and gofakeit, but it's also able to stream data directly to stdout, Kafka, Redis and more (Elastic and MongoDB coming). JR can talk directly to Confluent Schema Registry, manage JSON Schema and Avro schemas, and easily maintain coherence and referential integrity. If you need more than what is OOTB in JR, you can also easily pipe your data streams to other CLI tools like kcat thanks to its flexibility.
-
What tool do you use to document your Kafka message formats?
-
How to handle failing message in a topic with Avro schema?
Check here for more details. https://github.com/confluentinc/schema-registry
-
What is Schema Registry and How Does It Work? [Explained]
Confluent Schema Registry for Apache Kafka [GitHub]
-
Testing a Kafka consumer with Avro schema messages in your Spring Boot application with Testcontainers
So that means we can configure the Kafka producer and consumer with an imaginary schema registry URL that only needs to start with “mock://”, and you automatically get to work with the MockSchemaRegistryClient. This way you don't need to explicitly instantiate the MockSchemaRegistryClient and configure everything accordingly. That also eliminates the need for the Confluent Schema Registry container. Running the Kafka Testcontainer with the embedded Zookeeper, we no longer need an extra Zookeeper container and we are down to one Testcontainer for the messaging. This way I ended up with only two Testcontainers: Kafka and the database.
-
confluent Schema Registry and Rust
Confluent is a company founded by the creators of Apache Kafka. They provide the Confluent Platform, which consists of several components, all based on Kafka. The licenses for these components vary. The Schema Registry is under the Confluent Community License, which basically means it's free to use as long as you don't offer the Schema Registry itself as a SaaS solution. The source code can be found on GitHub.
-
An Overview About the Different Kafka Connect Plugins
Schema Registry from Confluent (GitHub) => http://localhost:8081/
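As a quick way to see what that endpoint serves, here is a minimal sketch that lists the registered subjects via the Schema Registry REST API; it assumes a registry running at http://localhost:8081/ as in the link above, and the example output is illustrative only.

```python
# Minimal sketch: list the subjects registered in a Confluent Schema Registry
# assumed to be running at http://localhost:8081/ (as in the link above).
import json
from urllib.request import urlopen

# GET /subjects returns a JSON array of subject names,
# typically "<topic>-key" and "<topic>-value".
with urlopen("http://localhost:8081/subjects") as response:
    subjects = json.loads(response.read())

print(subjects)  # illustrative output: ["orders-key", "orders-value"]
```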
What are some alternatives?
CVE-2022-27254 - PoC for vulnerability in Honda's Remote Keyless System (CVE-2022-27254)
kafka-ui - Open-Source Web UI for Apache Kafka Management
sarama - Sarama is a Go library for Apache Kafka. [Moved to: https://github.com/IBM/sarama]
kafdrop - Kafka Web UI
Karafka - Ruby and Rails efficient multithreaded Kafka processing framework
schema-registry-gitops - Manage Confluent Schema Registry subjects through Infrastructure as code
kafka-go - Kafka library in Go
rust-rdkafka - A fully asynchronous, futures-based Kafka client library for Rust based on librdkafka
rsyslog - a Rocket-fast SYStem for LOG processing
kafka-avro-without-registry - Test Spring Kafka application (using Avro as a serialization mechanism) without the need for Confluent Schema Registry
rust-kafka-101 - Getting started with Rust and Kafka
Protobuf - Protocol Buffers - Google's data interchange format