kafka-connect-elasticsearch
kafka-connect-twitter
| | kafka-connect-elasticsearch | kafka-connect-twitter |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 744 | 126 |
| Growth | 0.5% | - |
| Activity | 8.6 | 0.0 |
| Last commit | about 20 hours ago | over 1 year ago |
| Language | Java | Java |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kafka-connect-elasticsearch
- Vinted Search Scaling Chapter 1: Indexing
Kafka Connect is a scalable and reliable tool for streaming data between Apache Kafka and other systems. It lets you quickly define connectors that move data into and out of Kafka. Luckily for us, there is an open-source connector that sends data from Kafka topics to Elasticsearch indices.
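As a rough sketch of what "defining a connector" looks like in practice, a sink connector is registered by POSTing a JSON config to the Kafka Connect REST API. The topic name, host names, and ports below are assumptions for illustration; the connector class and config keys follow the Confluent Elasticsearch sink connector's documented settings.

```shell
# Register an Elasticsearch sink connector with a Connect worker
# (assumes Connect on localhost:8083 and Elasticsearch on localhost:9200;
#  "search-indexing" is a hypothetical topic name)
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "elasticsearch-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "tasks.max": "1",
      "topics": "search-indexing",
      "connection.url": "http://localhost:9200",
      "key.ignore": "true",
      "schema.ignore": "true"
    }
  }'
```

Once registered, the worker continuously consumes records from the listed topics and writes them as documents into Elasticsearch, with no custom consumer code.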
kafka-connect-twitter
- A few starter questions: What is a good setup for learning? Is Confluent platform ok?
I'm reading O'Reilly's "Mastering Kafka Streams and ksqlDB" to start learning Kafka; it was suggested to me in an ad by Confluent. Unsurprisingly, it uses Confluent's software throughout the book. One of the first projects is a simple app that does sentiment analysis on tweets. The book uses kafka-console-producer and a sample .json file for the tweets, but for my app I wanted to read actual tweets. To do that I've been reading about Kafka Connect and looking at this repository, but I'm having a hard time understanding how to best deploy this for my local setup.

So far I've been using docker-compose.yml files provided by the book, which in turn use Confluent's docker images for kafka, zookeeper, etc. As for this Twitter Connect repository, it seems the recommended way of setting it up is to use Confluent's platform and its CLI tool to automagically install it, which is fine, but I wanted to learn how things work under the hood (to some extent) and, if possible, not rely so heavily on Confluent's software. Is it a good idea to just stick with Confluent and the book, or should I be reading different material for a first Kafka project and working with a different kind of setup? Perhaps I'm getting ahead of myself trying to use Kafka Connect at this point?
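On the "under the hood" part of the question: Kafka Connect itself ships with plain Apache Kafka and does not require the Confluent platform or its CLI. A minimal sketch, assuming a plugin directory and properties file names that are illustrative rather than prescribed, looks like this with the standalone worker included in the Apache Kafka distribution:

```shell
# Run a Kafka Connect worker in standalone mode using only Apache Kafka.
# Paths and property values below are assumptions for illustration.

# 1. Drop the connector's jars into a plugin directory and point the
#    worker at it in config/connect-standalone.properties:
#      plugin.path=/opt/connect-plugins

# 2. Describe the connector in its own properties file,
#    e.g. twitter-source.properties:
#      name=twitter-source
#      connector.class=<the connector class from the repository's docs>
#      tasks.max=1

# 3. Start the worker with both files; it loads the plugin and runs
#    the connector in-process:
bin/connect-standalone.sh config/connect-standalone.properties twitter-source.properties
```

Standalone mode keeps everything in a single process, which makes it easier to see what a worker, a plugin, and a connector config each contribute; the Confluent CLI is essentially automating these same steps.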
What are some alternatives?
Elasticsearch - Free and Open, Distributed, RESTful Search Engine
ksql - The database purpose-built for stream processing applications.
kafka-connect-file-pulse - 🔗 A multipurpose Kafka Connect connector that makes it easy to parse, transform and stream any file, in any format, into Apache Kafka
ksql-udf-deep-learning-mqtt-iot - Deep Learning UDF for KSQL for Streaming Anomaly Detection of MQTT IoT Sensor Data
kafka-connect-cosmosdb - Kafka Connect connectors for Azure Cosmos DB
kafka-connect-transform-xml - Transformation for converting XML data to Structured data.
kafka-connect-jdbc - Kafka Connect connector for JDBC-compatible databases
kafka-local - Docker Compose configuration to run Kafka locally.
kafka-rest - Confluent REST Proxy for Kafka
snowflake-kafka-connector - Snowflake Kafka Connector (Sink Connector)
demo-scene - 👾Scripts and samples to support Confluent Demos and Talks. ⚠️Might be rough around the edges ;-) 👉For automated tutorials and QA'd code, see https://github.com/confluentinc/examples/
kryptonite-for-kafka - Kryptonite for Kafka is a client-side 🔒 field level 🔓 cryptography library for Apache Kafka® offering a Kafka Connect SMT, ksqlDB UDFs, and a standalone HTTP API service. It's an ! UNOFFICIAL ! community project