demo-scene
kafka-connect-twitter
| | demo-scene | kafka-connect-twitter |
|---|---|---|
| Mentions | 24 | 1 |
| Stars | 1,453 | 126 |
| Growth | 1.2% | - |
| Activity | 4.3 | 0.0 |
| Latest commit | 5 days ago | over 1 year ago |
| Language | Shell | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
demo-scene
-
Confused and frustrated about Kafka
Here’s a great place to dig into things: https://developer.confluent.io
-
Getting Started as a Kafka Developer
Once you’ve decided on a language to focus on, it’s time to start filling those knowledge gaps. Don’t be discouraged by this step. We all have knowledge gaps, and filling them can be very rewarding. First, you’ll want to make sure you have a good understanding of Kafka basics. Fortunately, there are many resources to help you with this. A web search will turn up many great books and other resources. And, of course, Confluent Developer offers interactive courses ranging from introductory (Apache Kafka 101) to advanced (Kafka Internals), full documentation, and other content to help you get started.
- Any suggestions for writing a long thesis about Kafka's scalability, availability and fault-tolerance?
-
What Are Apache Kafka Consumer Group IDs?
Confluent Developer: Learn Apache Kafka through Confluent Developer tutorials, documentation, courses, blog posts, and examples.
-
Problems with the set-up
Spend some time on https://developer.confluent.io, especially https://developer.confluent.io/get-started/java/, and look at https://github.com/confluentinc/examples. Those will give really practical examples for how to set up your configs and start consuming.
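As a rough sketch of what "setting up your configs" means for a Java consumer, the properties below use standard Kafka consumer setting names; the broker address and group ID are placeholders, not values from the thread:

```properties
# Illustrative consumer settings; bootstrap.servers and group.id
# are placeholders for your own environment
bootstrap.servers=localhost:9092
group.id=my-first-consumer-group
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Start from the beginning of the topic when no committed offset exists
auto.offset.reset=earliest
enable.auto.commit=true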
-
Looking For Suggestions on The Definitive Guide (V2)
FWIW you'll find a bunch of promo codes over at developer.confluent.io
-
Kafka to fetch data from another microservices
Kafka streams is a Java library built on the producer/consumer APIs designed to create event-driven microservices on top of Kafka. In general, I recommend exploring https://developer.confluent.io to learn more
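To make "a Java library built on the producer/consumer APIs" concrete, here is a minimal sketch of a Kafka Streams topology; the topic names and the filtering logic are illustrative assumptions, not anything from the thread:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class TweetFilterTopology {
    // Builds a topology that reads events from one (placeholder) topic,
    // keeps only those mentioning "kafka", and writes them to another.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> tweets = builder.stream("raw-tweets");
        tweets.filter((key, text) -> text != null
                        && text.toLowerCase().contains("kafka"))
              .to("kafka-tweets");
        return builder.build();
    }
}
```

At runtime you would pass this `Topology` to a `KafkaStreams` instance along with broker and serde configuration; the library then handles consuming, processing, and producing for you.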
-
Creating topics and console consumers using a dockerized Kafka cluster?
For example: https://github.com/confluentinc/demo-scene/tree/master/kafka-connect-zero-to-hero
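For a local single-broker setup along those lines, a docker-compose file typically looks something like the sketch below (image versions, ports, and service names are illustrative; the linked demo has a maintained file):

```yaml
# Minimal single-broker sketch using Confluent's images
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  broker:
    image: confluentinc/cp-kafka:7.3.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

With that running, you can create topics and console consumers by exec-ing into the broker container, e.g. `docker exec broker kafka-topics --bootstrap-server localhost:9092 --create --topic test` and `docker exec broker kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning`.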
-
A few starter questions: What is a good setup for learning? Is Confluent platform ok?
should I be reading a different material for a first Kafka project and working with a different kind of setup? Now, I can't be unbiased on this one ;) One of the things we're doing with Confluent Developer is to try and create a resource for people to learn Kafka from the ground up, whether they ultimately decide to pursue it on Confluent or not. The fundamentals of Kafka that you'll be learning are going to be just as applicable whether you're using Apache Kafka self-managed, or Confluent, or AWS' MSK, or whatever else. Personally, at this stage I'd use whatever setup you find easiest and that adds the least friction to your learning journey. As u/louisvell mentioned, /u/stephanemaarek's courses on Udemy are also very popular, if you wanted a "second opinion" on how to approach learning Kafka.
-
Kafka Learning Path
https://developer.confluent.io; they’ve thought about this question very deeply
kafka-connect-twitter
-
A few starter questions: What is a good setup for learning? Is Confluent platform ok?
I'm reading O'Reilly's "Mastering Kafka Streams and ksqlDB" to start learning Kafka; it was suggested to me in an ad by Confluent. Unsurprisingly it uses Confluent's software throughout the book. One of the first projects is a simple app that does sentiment analysis on tweets. The book uses kafka-console-producer and a sample .json file for the tweets, but for my app I wanted to read actual tweets. To do that I've been reading about Kafka Connect and looking at this repository, but I'm having a hard time understanding how to best deploy this for my local setup. So far I've been using docker-compose.yml files provided by the book, which in turn use Confluent's docker images for kafka, zookeeper, etc. As for this Twitter Connect repository, it seems the recommended way of setting it up is to use Confluent's platform and its CLI tool to automagically install it, which is fine, but I wanted to learn how things work under the hood (to some extent) and if possible not rely so heavily upon Confluent's software. Is it a good idea to just stick with Confluent and the book, or should I be reading a different material for a first Kafka project and working with a different kind of setup? Perhaps I'm getting ahead of myself trying to use Kafka Connect at this point?
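Outside of Confluent's CLI, deploying a source connector generally means putting its jar on the Connect worker's `plugin.path` and POSTing a JSON config to the worker's REST API. The shape below follows jcustenborder's kafka-connect-twitter; the exact property names may differ for other forks and versions, so treat them as illustrative placeholders:

```json
{
  "name": "twitter_source",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.twitter.TwitterSourceConnector",
    "twitter.oauth.consumerKey": "<key>",
    "twitter.oauth.consumerSecret": "<secret>",
    "twitter.oauth.accessToken": "<token>",
    "twitter.oauth.accessTokenSecret": "<token-secret>",
    "filter.keywords": "kafka",
    "kafka.status.topic": "tweets"
  }
}
```

You would submit this with something like `curl -X POST -H "Content-Type: application/json" --data @twitter-source.json http://localhost:8083/connectors`, assuming a Connect worker on its default REST port 8083.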
What are some alternatives?
docker-kafka-kraft - Apache Kafka Docker image using Kafka Raft metadata mode (KRaft). https://hub.docker.com/r/moeenz/docker-kafka-kraft
kafka-local - Run Local Kafka with Docker Compose
cp-all-in-one - docker-compose.yml files for cp-all-in-one, cp-all-in-one-community, cp-all-in-one-cloud, Apache Kafka Confluent Platform
kafka-connect-elasticsearch - Kafka Connect Elasticsearch connector
kafka-lag-exporter - Monitor Kafka Consumer Group Latency with Kafka Lag Exporter
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
protoactor-go - Proto Actor - Ultra fast distributed actors for Go, C# and Java/Kotlin
ksql - The database purpose-built for stream processing applications.
examples - Apache Kafka and Confluent Platform examples and demos
mongo-kafka - MongoDB Kafka Connector
kafka-connect-opensky - Kafka Source Connector reading in from the OpenSky API
confluent-kafka-python - Confluent's Kafka Python Client