A few starter questions: What is a good setup for learning? Is the Confluent Platform OK?

This page summarizes the projects mentioned and recommended in the original post on /r/apachekafka

  • demo-scene

    👾Scripts and samples to support Confluent Demos and Talks. ⚠️Might be rough around the edges ;-) 👉For automated tutorials and QA'd code, see https://github.com/confluentinc/examples/

  • "Should I be reading different material for a first Kafka project and working with a different kind of setup?" Now, I can't be unbiased on this one ;) One of the things we're doing with Confluent Developer is to try to create a resource for people to learn Kafka from the ground up, whether they ultimately decide to pursue it on Confluent or not. The Kafka fundamentals you'll be learning are just as applicable whether you're using self-managed Apache Kafka, Confluent, AWS's MSK, or anything else. Personally, at this stage I'd use whatever setup you find easiest and that adds the least friction to your learning journey. As /u/louisvell mentioned, /u/stephanemaarek's courses on Udemy are also very popular if you want a "second opinion" on how to approach learning Kafka.

  • kafka-connect-twitter

    Kafka Connect connector to stream data in real time from Twitter.

  • I'm reading O'Reilly's "Mastering Kafka Streams and ksqlDB" to start learning Kafka; it was suggested to me in an ad by Confluent. Unsurprisingly, it uses Confluent's software throughout. One of the first projects is a simple app that does sentiment analysis on tweets. The book uses kafka-console-producer and a sample .json file for the tweets, but for my app I wanted to read actual tweets. To do that I've been reading about Kafka Connect and looking at this repository, but I'm having a hard time understanding how best to deploy it for my local setup. So far I've been using the docker-compose.yml files provided by the book, which in turn use Confluent's Docker images for Kafka, ZooKeeper, etc. As for this Twitter Connect repository, it seems the recommended way to set it up is to use Confluent Platform and its CLI tool to automagically install it, which is fine, but I wanted to learn how things work under the hood (to some extent) and, if possible, not rely so heavily on Confluent's software (a sketch of one possible CLI-free setup follows the project list below). Is it a good idea to just stick with Confluent and the book, or should I be reading different material for a first Kafka project and working with a different kind of setup? Perhaps I'm getting ahead of myself trying to use Kafka Connect at this point?

  • kafka-local

    Run Local Kafka with Docker Compose
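
For the question above about using the Twitter connector without the Confluent CLI: below is a minimal docker-compose.yml sketch, adapted from Confluent's standard single-broker examples and using the same cp-* images the book relies on. The image version, topic names, and the ./connect-plugins directory are illustrative assumptions, not taken from the original post, so compare it against the book's own compose files. The main point is that a Kafka Connect worker only needs plugin.path to include a directory containing the connector's JARs; the Confluent CLI is just one convenient way of putting them there.

    version: '3'
    services:
      zookeeper:
        image: confluentinc/cp-zookeeper:7.3.0   # version is an assumption; match the book's
        environment:
          ZOOKEEPER_CLIENT_PORT: 2181
          ZOOKEEPER_TICK_TIME: 2000

      kafka:
        image: confluentinc/cp-kafka:7.3.0
        depends_on: [zookeeper]
        ports:
          - "9092:9092"                          # reachable from the host for kafka-console-producer etc.
        environment:
          KAFKA_BROKER_ID: 1
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
          KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
          KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
          KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
          KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1

      connect:
        image: confluentinc/cp-kafka-connect:7.3.0
        depends_on: [kafka]
        ports:
          - "8083:8083"                          # Kafka Connect REST API
        environment:
          CONNECT_BOOTSTRAP_SERVERS: kafka:29092
          CONNECT_REST_ADVERTISED_HOST_NAME: connect
          CONNECT_GROUP_ID: local-connect
          CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
          CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
          CONNECT_STATUS_STORAGE_TOPIC: _connect-status
          CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
          CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
          CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
          CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
          CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
          CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"   # plain JSON rather than schema-wrapped JSON
          # Instead of `confluent-hub install`, point plugin.path at a host-mounted directory
          CONNECT_PLUGIN_PATH: /usr/share/java,/connect-plugins
        volumes:
          - ./connect-plugins:/connect-plugins   # drop the connector's unpacked JARs here

Once the worker is running, connectors are created through Kafka Connect's own REST API rather than any vendor tooling. The connector class and property names below follow the kafka-connect-twitter README as usually shown in demos; treat them as illustrative and check the README of the version you actually build:

    # Copy the built connector JARs into ./connect-plugins/kafka-connect-twitter/,
    # restart the connect container, then register the connector:
    curl -s -X PUT -H "Content-Type: application/json" \
      http://localhost:8083/connectors/twitter-source/config \
      -d '{
        "connector.class": "com.github.jcustenborder.kafka.connect.twitter.TwitterSourceConnector",
        "twitter.oauth.consumerKey":       "<api-key>",
        "twitter.oauth.consumerSecret":    "<api-secret>",
        "twitter.oauth.accessToken":       "<access-token>",
        "twitter.oauth.accessTokenSecret": "<access-token-secret>",
        "filter.keywords":    "kafka",
        "kafka.status.topic": "tweets",
        "process.deletes":    "false"
      }'

Keeping the connector install as a visible step (build or download the JARs, copy them into the mounted plugin directory, restart the worker) gives exactly the "under the hood" view the CLI hides, and the same compose file works whether or not any other Confluent tooling is involved.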

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
