librdkafka vs Confluent Kafka Golang Client
| | librdkafka | Confluent Kafka Golang Client |
|---|---|---|
| Mentions | 18 | 12 |
| Stars | 7,292 | 4,426 |
| Growth | 1.2% | 1.9% |
| Activity | 8.3 | 8.1 |
| Latest commit | 4 days ago | 5 days ago |
| Language | C | HTML |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
librdkafka
-
Do you use Rust in your professional career?
recent PR: https://github.com/confluentinc/librdkafka/pull/4275
-
JR, quality Random Data from the Command line, part I
```properties
# Kafka configuration
# https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md
bootstrap.servers=
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=
sasl.password=
compression.type=gzip
compression.level=9
statistics.interval.ms=1000
```
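As an aside (not part of the original post), reading such a librdkafka-style properties file in Go is a few lines of standard library code. This is a minimal sketch; `parseRdkafkaConfig` is a hypothetical helper name, and the resulting map could be fed into a client's configuration (e.g. confluent-kafka-go's `kafka.ConfigMap`):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseRdkafkaConfig parses librdkafka-style "key=value" properties,
// skipping blank lines and "#" comments, into a plain string map.
func parseRdkafkaConfig(text string) map[string]string {
	conf := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			conf[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return conf
}

func main() {
	conf := parseRdkafkaConfig("# comment\nsecurity.protocol=SASL_SSL\ncompression.type=gzip\n")
	fmt.Println(conf["security.protocol"], conf["compression.type"])
}
```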
-
A Critical Detail about Kafka Partitioners
But what about Kafka producer clients in other languages? The excellent librdkafka project is a C/C++ implementation of Kafka clients and is widely used for non-JVM Kafka applications. Additionally, Kafka clients in other languages (Python, C#) build on top of it. The default partitioner for librdkafka uses the CRC32 hash function to map a key to a partition.
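To illustrate the keyed path of that default (librdkafka's `consistent_random` partitioner hashes non-null keys with CRC32 and assigns null-keyed messages randomly), here is a minimal Go sketch. `crc32Partition` is a hypothetical helper, not librdkafka's actual C code:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// crc32Partition sketches the keyed path of librdkafka's default
// partitioner: partition = crc32(key) % partitionCount.
func crc32Partition(key []byte, partitionCount int) int {
	return int(crc32.ChecksumIEEE(key) % uint32(partitionCount))
}

func main() {
	// The same key always maps to the same partition.
	fmt.Println(crc32Partition([]byte("user-42"), 6))
}
```

The point of the article follows directly from this: because the Java client hashes keys with murmur2 instead of CRC32, the same key can land on different partitions when producers in different languages keep their defaults.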
-
Horizontally scaling Kafka consumers with rendezvous hashing
We could have made some changes at the librdkafka level (see this), but we didn’t really want to pursue this (at least not yet).
-
Events with same key going to different partitions
You want records with the same key to always land on the same partition, so you need all the clients to use the same hashing algorithm. The easiest way to do that is to make sure the librdkafka client uses the Java-compatible murmur2_random hash algorithm. See the "Partitioner" section here: https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md
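For illustration, here is a Go port of the murmur2 hash and positive-modulo partition selection used by the Kafka Java client (and by librdkafka's `murmur2_random` for non-null keys). The helper names are hypothetical and this is a sketch, not the library's own code:

```go
package main

import "fmt"

// murmur2 is a Go port of the murmur2 variant used by the Apache Kafka
// Java client (seed 0x9747b28c). uint32 arithmetic mirrors Java's
// overflowing int multiply and unsigned shifts.
func murmur2(data []byte) uint32 {
	const m uint32 = 0x5bd1e995
	h := uint32(0x9747b28c) ^ uint32(len(data))
	i := 0
	for ; i+4 <= len(data); i += 4 {
		k := uint32(data[i]) | uint32(data[i+1])<<8 |
			uint32(data[i+2])<<16 | uint32(data[i+3])<<24
		k *= m
		k ^= k >> 24
		k *= m
		h *= m
		h ^= k
	}
	// Mix in the remaining 0-3 bytes.
	switch len(data) - i {
	case 3:
		h ^= uint32(data[i+2]) << 16
		fallthrough
	case 2:
		h ^= uint32(data[i+1]) << 8
		fallthrough
	case 1:
		h ^= uint32(data[i])
		h *= m
	}
	h ^= h >> 13
	h *= m
	h ^= h >> 15
	return h
}

// partitionForKey picks a partition the way the Java default partitioner
// does for a non-null key: mask to a positive value, then take the modulo.
func partitionForKey(key []byte, numPartitions int) int {
	return int(murmur2(key)&0x7fffffff) % numPartitions
}

func main() {
	key := []byte("order-123")
	fmt.Printf("key %q -> partition %d of 12\n", key, partitionForKey(key, 12))
}
```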
-
Getting sum type values from a map
As my first "real world" (ish) project in Vlang, I'm trying to copy https://github.com/confluentinc/confluent-kafka-go, which is a Go wrapper for Kafka C client library, https://github.com/edenhill/librdkafka
-
Installing node-rdkafka on M1 for use with SASL
If you're using Kafka in a Node.js app, it's likely that you'll need node-rdkafka. This is a library that wraps the librdkafka library and makes it available in Node.js. According to the project's README, "All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library."
-
Introduction to Key Apache Kafka® Concepts
```python
from configparser import ConfigParser
from confluent_kafka import Producer

# Parse the configuration.
# See https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
config_parser = ConfigParser()
config_parser.read_file(args.config_file)
config = dict(config_parser['default'])

# Create Producer instance
producer = Producer(config)
```
-
video analytics on edge
- git clone https://github.com/edenhill/librdkafka.git
- librdkafka - the Apache Kafka C/C++ client library
Confluent Kafka Golang Client
-
book about golang and kafka
There are two main libraries that people use to write clients: Confluent Kafka and Segment's kafka-go.
-
Fix it, Fork it, Fuck off
You are right, but in practice that's not what happens. Companies do not rely on open source libraries; the developers working for such companies do.
I can give you a realistic example. If you want to use Kafka and Go, probably your only option is to use https://github.com/confluentinc/confluent-kafka-go. Its LICENSE explicitly says "no warranty". Now, what if I find a bug in the library? Only two realistic solutions from my side:
1. I submit the issue and hope for the maintainers to fix it
2. I dig deeper and try to fix the issue. I submit the PR
Neither of the above scenarios is guaranteed to have a happy ending. The issue could be ignored, or piled up among thousands of other (maybe higher-priority) issues. My solution may not be optimal and could be rejected (or, even if it is optimal, nobody may take a look at it, and it could remain open for weeks or months).
> If that is a problem for you, negotiate a different contract up front - with the maintainer or someone else willing to do the work. That probably means paying them.
In the real world that would mean I go to my manager and ask them to pay money to the maintainers of confluent-kafka-go to fix the issue I found. I don't think my manager would approve that, but let's imagine he does. The maintainers of confluent-kafka-go may not want money to fix the issue. They probably already have jobs that pay them well, and they work on the library at will.
Note: I'm talking about confluent-kafka-go, which I know is behind the Confluent software company. But I could as well be talking about libraries maintained by individuals like https://github.com/edenhill/librdkafka
-
What are Golang competitors in 2022 when it comes to one-file binary deployment?
it can be completely statically linked binaries. example: https://github.com/confluentinc/confluent-kafka-go/blob/db57ef6235/kafka/librdkafka_vendor/README.md
-
Benthos - Fancy stream processing made operationally mundane
If you find the kafka input slow, try kafka_franz. It might be a bit faster, since it’s based on https://github.com/twmb/franz-go. The kafka one is based on https://github.com/Shopify/sarama. You can also write a custom input based on https://github.com/confluentinc/confluent-kafka-go, but this library relies on CGo, which can be annoying.
-
Sharing event schema ( type ) between producer and a consumer
Last time I checked, Confluent does not have a Schema Registry client for Go, only for Java. So instead of that, I rely on the guidelines defined for the serialized data: specifically, I've used gRPC + Protobuf for doing this, together with buf to detect breaking changes. buf has their own schema registry; perhaps that could be something you could explore as well.
-
Hunting down a C memory leak in a Go program
So, in the interests of full transparency - we at Zendesk are actually running a fork of confluent-kafka-go, which I forked to add, amongst other things, context support: https://github.com/confluentinc/confluent-kafka-go/pull/626
This bug actually happened because I mis-merged upstream into our fork and missed an important call to rd_kafka_poll_set_consumer: https://github.com/zendesk/confluent-kafka-go/commit/6e2d889...
-
Create page view analytics system using Kafka, Go, Postgres & GraphQL in 5 steps
Set up a Kafka Producer using confluent-kafka-go
-
Is segmentio/kafka-go production ready ?
I'd suggest https://github.com/confluentinc/confluent-kafka-go. We switched from sarama-cluster with minimal work and it works fine, and we process approx 1.2M messages per hour.
-
Go and Kafka
In my company we use https://github.com/confluentinc/confluent-kafka-go.
What are some alternatives?
CVE-2022-27254 - PoC for vulnerability in Honda's Remote Keyless System(CVE-2022-27254)
sarama - Sarama is a Go library for Apache Kafka. [Moved to: https://github.com/IBM/sarama]
kafka-go - Kafka library in Go
Karafka - Ruby and Rails efficient multithreaded Kafka processing framework
Centrifugo - Scalable real-time messaging server in a language-agnostic way. Self-hosted alternative to Pubnub, Pusher, Ably. Set up once and forever.
goka - Goka is a compact yet powerful distributed stream processing library for Apache Kafka written in Go.
rsyslog - a Rocket-fast SYStem for LOG processing
Benthos - Fancy stream processing made operationally mundane
rust-kafka-101 - Getting started with Rust and Kafka
confluent-kafka-python - Confluent's Kafka Python Client