kafka-connect-twitter vs mongo-kafka

| | kafka-connect-twitter | mongo-kafka |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 126 | 324 |
| Growth | - | 0.9% |
| Activity | 0.0 | 6.7 |
| Last commit | over 1 year ago | 5 days ago |
| Language | Java | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kafka-connect-twitter
A few starter questions: What is a good setup for learning? Is Confluent platform ok?
I'm reading O'Reilly's "Mastering Kafka Streams and ksqlDB" to start learning Kafka; it was suggested to me in an ad by Confluent. Unsurprisingly, it uses Confluent's software throughout the book. One of the first projects is a simple app that does sentiment analysis on tweets. The book uses kafka-console-producer and a sample .json file for the tweets, but for my app I wanted to read actual tweets. To do that I've been reading about Kafka Connect and looking at this repository, but I'm having a hard time understanding how best to deploy it for my local setup.

So far I've been using the docker-compose.yml files provided by the book, which in turn use Confluent's Docker images for Kafka, ZooKeeper, etc. As for this Twitter Connect repository, the recommended way of setting it up seems to be to use Confluent's platform and its CLI tool to automagically install it, which is fine, but I wanted to learn how things work under the hood (to some extent) and, if possible, not rely so heavily on Confluent's software.

Is it a good idea to just stick with Confluent and the book, or should I be reading different material for a first Kafka project and working with a different kind of setup? Perhaps I'm getting ahead of myself trying to use Kafka Connect at this point?
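For what it's worth, Kafka Connect itself ships with the plain Apache Kafka distribution, so a connector like this can be run without the Confluent CLI: unpack the connector JARs into a directory, point the worker's plugin.path at it, and start a standalone worker. Below is a minimal sketch; the connector.class shown assumes the jcustenborder kafka-connect-twitter connector, and the Twitter credential/keyword property names vary by connector, so check the repository README for the exact keys.

```properties
# connect-standalone.properties -- worker config for the plain Apache Kafka distribution
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
# Directory where the connector's JARs were unpacked
plugin.path=/opt/connect-plugins

# twitter-source.properties -- connector config (class name assumes the
# jcustenborder connector; credential keys are documented in its README)
name=twitter-source
connector.class=com.github.jcustenborder.kafka.connect.twitter.TwitterSourceConnector
tasks.max=1
```

The worker is then started from the Kafka installation directory with `bin/connect-standalone.sh connect-standalone.properties twitter-source.properties`, which keeps the whole setup visible instead of hidden behind a platform installer.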
mongo-kafka
Difficulty configuring log4j when deploying code as plugin for an app
I am working on a custom Kafka-Mongo sink connector (specifically, a custom WriteModelStrategy to be used with the official Mongo sink connector here: https://github.com/mongodb/mongo-kafka). My code is not a standalone, executable Java application but rather a JAR that augments the functionality of another Java application.
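For anyone in the same situation: the sink's extension point is the WriteModelStrategy interface, and the usual approach to logging from a Connect plugin is to log through SLF4J and let the worker's own log4j configuration (e.g. connect-log4j.properties) control the output, rather than bundling a log4j.properties inside the plugin JAR. A minimal sketch of a custom strategy follows; the class name UpsertByIdStrategy and the upsert-by-_id behavior are invented for illustration, while the WriteModelStrategy and SinkDocument types come from mongo-kafka itself.

```java
package com.example.sink; // hypothetical package for this sketch

import org.bson.BsonDocument;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.ReplaceOptions;
import com.mongodb.client.model.WriteModel;
import com.mongodb.kafka.connect.sink.converter.SinkDocument;
import com.mongodb.kafka.connect.sink.writemodel.strategy.WriteModelStrategy;

// Upserts each sink record by its _id field.
public class UpsertByIdStrategy implements WriteModelStrategy {

    // Log through SLF4J: inside a Connect plugin, the worker's log4j
    // configuration decides where this output goes, not a log4j
    // config file packaged into the plugin JAR.
    private static final Logger LOGGER = LoggerFactory.getLogger(UpsertByIdStrategy.class);

    @Override
    public WriteModel<BsonDocument> createWriteModel(final SinkDocument document) {
        BsonDocument value = document.getValueDoc()
                .orElseThrow(() -> new IllegalArgumentException("Sink record has no value document"));
        LOGGER.debug("Building upsert for _id={}", value.get("_id"));
        // Replace the existing document (or insert one) matching on _id.
        return new ReplaceOneModel<>(
                Filters.eq("_id", value.get("_id")),
                value,
                new ReplaceOptions().upsert(true));
    }
}
```

The strategy is then wired into the sink via its writemodel.strategy configuration property set to the fully qualified class name, with the JAR placed on the connector's plugin path.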
Database Replication using CDC
To create a connection between MongoDB and Apache Kafka, MongoDB has built an official framework, the MongoDB Kafka Connector.
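As a minimal sketch of what that looks like in practice, here is the JSON one might POST to a distributed Connect worker to register the source connector; the connection URI, database, and collection names are placeholders:

```json
{
  "name": "mongo-cdc-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://localhost:27017",
    "database": "inventory",
    "collection": "orders"
  }
}
```

POSTing this document to the worker's REST API (by default http://localhost:8083/connectors, with Content-Type: application/json) starts a change-stream-backed source task that publishes each change event from the collection to a Kafka topic.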
What are some alternatives?
kafka-local - Run Local Kafka with Docker Compose
kafka-connect-file-pulse - 🔗 A multipurpose Kafka Connect connector that makes it easy to parse, transform and stream any file, in any format, into Apache Kafka
demo-scene - 👾Scripts and samples to support Confluent Demos and Talks. ⚠️Might be rough around the edges ;-) 👉For automated tutorials and QA'd code, see https://github.com/confluentinc/examples/
MongoDB - The MongoDB Database
kafka-connect-elasticsearch - Kafka Connect Elasticsearch connector
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
kafka-connect-cosmosdb - Kafka Connect connectors for Azure Cosmos DB
ksql - The database purpose-built for stream processing applications.
Hazelcast Jet - Distributed Stream and Batch Processing
kryptonite-for-kafka - Kryptonite for Kafka is a client-side 🔒 field level 🔓 cryptography library for Apache Kafka® offering a Kafka Connect SMT, ksqlDB UDFs, and a standalone HTTP API service. It's an ! UNOFFICIAL ! community project
java-11-examples - JDK 11 examples and demo projects.