kafka-local vs kafka-connect-twitter
| | kafka-local | kafka-connect-twitter |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 10 | 126 |
| Growth | - | - |
| Activity | 7.8 | 0.0 |
| Latest commit | 30 days ago | over 1 year ago |
| Language | Shell | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kafka-local
- Looking For Suggestions on The Definitive Guide (V2)
- Kafka Streams application doesn't start up
- Problems Installing Kafka for learning purposes
- A few starter questions: What is a good setup for learning? Is Confluent platform ok?
- Kafka Consumer inside a Dockerized React App is not working
- I have a docker-compose.yml that already has zookeeper and kafka and it works, but I need help adding kafka connect to the compose file
  We use confluentinc/cp-kafka-* images for our docker-compose setup, so our config is slightly different (see: yaml for a 3-node SASL-authenticated cluster).
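Adding a Connect worker to a compose file that already defines `zookeeper` and `kafka` services might look like the sketch below, assuming the `confluentinc/cp-kafka-connect` image and a broker reachable at `kafka:9092`; the service name, topic names, and tag are illustrative, not a definitive setup:

```yaml
# Hypothetical service to append to an existing docker-compose.yml
# alongside the existing `zookeeper` and `kafka` services.
  connect:
    image: confluentinc/cp-kafka-connect:7.4.0
    depends_on:
      - kafka
    ports:
      - "8083:8083"   # Connect REST API
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_GROUP_ID: "connect-cluster"
      # Internal topics; replication factor 1 suits a single-broker dev setup
      CONNECT_CONFIG_STORAGE_TOPIC: "_connect-configs"
      CONNECT_OFFSET_STORAGE_TOPIC: "_connect-offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "_connect-status"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      # Directory where connector jars (e.g. the Twitter connector) are mounted
      CONNECT_PLUGIN_PATH: "/usr/share/java,/etc/kafka-connect/jars"
```

Once the worker is up, connectors are created by POSTing JSON configs to the REST API on port 8083.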
- Developing Kafka Streams applications
  Our setup is open-sourced here: https://github.com/operatr-io/kafka-local
kafka-connect-twitter
- A few starter questions: What is a good setup for learning? Is Confluent platform ok?
  I'm reading O'Reilly's "Mastering Kafka Streams and ksqlDB" to start learning Kafka; it was suggested to me in an ad by Confluent. Unsurprisingly, it uses Confluent's software throughout the book. One of the first projects is a simple app that does sentiment analysis on tweets. The book uses kafka-console-producer and a sample .json file for the tweets, but for my app I wanted to read actual tweets. To do that I've been reading about Kafka Connect and looking at this repository, but I'm having a hard time understanding how best to deploy this for my local setup. So far I've been using the docker-compose.yml files provided by the book, which in turn use Confluent's docker images for kafka, zookeeper, etc. As for this Twitter Connect repository, it seems the recommended way of setting it up is to use Confluent's platform and its CLI tool to automagically install it, which is fine, but I wanted to learn how things work under the hood (to some extent) and, if possible, not rely so heavily on Confluent's software. Is it a good idea to just stick with Confluent and the book, or should I be reading different material for a first Kafka project and working with a different kind of setup? Perhaps I'm getting ahead of myself trying to use Kafka Connect at this point?
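The book's simpler approach, piping a sample JSON file into kafka-console-producer instead of wiring up the Twitter source connector, can be sketched as follows; the broker address, topic name, and file name are illustrative, and the commands assume the Kafka CLI tools are on the PATH:

```shell
# Create the topic the Streams app will read from
# (single partition / replication factor 1 is fine for local learning).
kafka-topics --bootstrap-server localhost:9092 \
  --create --topic tweets --partitions 1 --replication-factor 1

# Feed sample tweets, one JSON object per line, into the topic.
kafka-console-producer --bootstrap-server localhost:9092 \
  --topic tweets < sample-tweets.json
```

This keeps the first project independent of Kafka Connect; the connector can be introduced later once the Streams side is working.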
What are some alternatives?
demo-scene - 👾Scripts and samples to support Confluent Demos and Talks. ⚠️Might be rough around the edges ;-) 👉For automated tutorials and QA'd code, see https://github.com/confluentinc/examples/
jq - Command-line JSON processor [Moved to: https://github.com/jqlang/jq]
kafka-connect-elasticsearch - Kafka Connect Elasticsearch connector
fast-data-dev - Kafka Docker for development. Kafka, Zookeeper, Schema Registry, Kafka-Connect, Landoop Tools, 20+ connectors
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
cp-docker-images - [DEPRECATED] Docker images for Confluent Platform.
ksql - The database purpose-built for stream processing applications.
mongo-kafka - MongoDB Kafka Connector