kafka-connect-cosmosdb vs Apache Avro
| | kafka-connect-cosmosdb | Apache Avro |
|---|---|---|
| Mentions | 2 | 22 |
| Stars | 44 | 2,753 |
| Growth | - | 1.3% |
| Activity | 8.3 | 9.7 |
| Latest commit | about 1 month ago | 6 days ago |
| Language | Java | Java |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kafka-connect-cosmosdb
- Critical New 0-day Vulnerability in Popular Log4j Library - List of applications
  Kafka Connect CosmosDB: https://github.com/microsoft/kafka-connect-cosmosdb/blob/0f5d0c9dbf2812400bb480d1ff0672dfa6bb56f0/CHANGELOG.md
- Getting started with Kafka Connector for Azure Cosmos DB using Docker
  The Azure Cosmos DB connector moves data between Azure Cosmos DB and Kafka, and is available as both a source and a sink. The Sink connector writes data from a Kafka topic to an Azure Cosmos DB container, while the Source connector writes changes from an Azure Cosmos DB container to a Kafka topic. At the time of writing, the connector is in pre-production. You can read more about it on the GitHub repo or install/download it from the Confluent Hub.
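As a rough sketch of what wiring up the sink side might look like, here is a sample Kafka Connect configuration. The property names follow the conventions documented in the repo, but since the connector is pre-production they may change; the endpoint, key, topic, and database/container names below are placeholders, so verify everything against the current README:

```json
{
  "name": "cosmosdb-sink-connector",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "connect.cosmos.connection.endpoint": "https://<account>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<account-key>",
    "connect.cosmos.databasename": "kafkadata",
    "connect.cosmos.containers.topicmap": "orders#orders"
  }
}
```

Posting this JSON to the Kafka Connect REST API (`POST /connectors`) would start a sink task that drains the `orders` topic into the mapped Cosmos DB container.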
Apache Avro
- Generating Avro Schemas from Go types
  The most common format for describing schema in this scenario is Apache Avro.
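Avro schemas are themselves JSON documents, which is what makes generating them from another language's types (Go structs here) tractable. A minimal record schema looks like this (the record and field names are illustrative):

```json
{
  "type": "record",
  "name": "User",
  "namespace": "com.example",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age", "type": "int"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```

The union type `["null", "string"]` with a `null` default is the standard Avro idiom for an optional field.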
- The state of Apache Avro in Rust
- How do people generate examples for multiple programming languages?
- gRPC on the client side
  Other serialization alternatives have a schema validation option: e.g., Avro, Kryo, and Protocol Buffers. Interestingly enough, gRPC uses Protobuf to offer RPC across distributed components.
- Understanding Azure Event Hubs Capture
  Apache Avro is a data serialization system; for more information, visit Apache Avro.
- In One Minute : Hadoop
  Avro, a data serialization system based on JSON schemas.
- Protocol Buffer vs. JSON for data serialization
- Marshaling objects in modern Java
  If a binary format is OK, use Protocol Buffers or Avro. Note that with binary formats you need a schema to serialize/deserialize your data. Therefore, you'd probably want a schema registry to store all past and present schemas for later use.
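The reason the schema is mandatory can be shown with a toy sketch in plain Python. This is not the real Avro wire format (the field layout and helper names below are invented for illustration), but it captures the key property: a binary payload carries only values, no field names or types, so the reader must share the writer's schema to make sense of the bytes.

```python
import struct

# A "schema" describing field order and types; the binary payload
# carries only values, so the reader needs this same schema to decode.
SCHEMA = [("name", "str"), ("age", "int")]

def serialize(record, schema):
    out = b""
    for field, ftype in schema:
        if ftype == "int":
            out += struct.pack(">i", record[field])
        else:  # str: length-prefixed UTF-8
            data = record[field].encode("utf-8")
            out += struct.pack(">i", len(data)) + data
    return out

def deserialize(blob, schema):
    record, offset = {}, 0
    for field, ftype in schema:
        if ftype == "int":
            (record[field],) = struct.unpack_from(">i", blob, offset)
            offset += 4
        else:
            (length,) = struct.unpack_from(">i", blob, offset)
            offset += 4
            record[field] = blob[offset:offset + length].decode("utf-8")
            offset += length
    return record

encoded = serialize({"name": "Ada", "age": 36}, SCHEMA)
print(deserialize(encoded, SCHEMA))  # round-trips only with the same schema
```

Decoding with a different schema (fields reordered or retyped) would misread the bytes, which is exactly why a schema registry that keeps every past and present schema version is so useful alongside binary formats.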
- How-to-Guide: Contributing to Open Source
  Apache Avro
- How should I handle storing and reading from large amounts of data in my project?
  Maybe it will be simpler to serialize all the data in a more compact data format, such as Avro (its README is here), a row-based format that seems to be able to use zstd/bzip2/xz compression.
What are some alternatives?
Protobuf - Protocol Buffers - Google's data interchange format
SBE - Simple Binary Encoding (SBE) - High Performance Message Codec
Apache Thrift - Apache Thrift
iceberg - Apache Iceberg
Apache Parquet - Apache Parquet
gRPC - The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)
Apache Orc - Apache ORC - the smallest, fastest columnar storage for Hadoop workloads
hudi - Upserts, Deletes And Incremental Processing on Big Data.
Wire - gRPC and protocol buffers for Android, Kotlin, Swift and Java.
Big Queue - A big, fast and persistent queue based on memory mapped file.
tape - A lightning fast, transactional, file-based FIFO for Android and Java.
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs