Apache Avro
iceberg
| | Apache Avro | iceberg |
|---|---|---|
| Mentions | 22 | 18 |
| Stars | 2,764 | 5,508 |
| Growth | 1.7% | 4.0% |
| Activity | 9.7 | 9.9 |
| Latest commit | about 20 hours ago | 5 days ago |
| Language | Java | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Avro
- Open Table Formats Such as Apache Iceberg Are Inevitable for Analytical Data
Apache Avro [1] is one, but it has been largely replaced by Parquet [2], which is a hybrid row/columnar format.
[1] https://avro.apache.org/
- Generating Avro Schemas from Go types
The most common format for describing schema in this scenario is Apache Avro.
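For illustration, a minimal Avro schema is a JSON document (conventionally an `.avsc` file); the record and field names below are hypothetical:

```json
{
  "type": "record",
  "name": "User",
  "namespace": "com.example",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age", "type": ["null", "int"], "default": null}
  ]
}
```

The `["null", "int"]` union makes `age` optional, with `null` as its default.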
- How do you update an existing avro schema using apache avro SchemaBuilder?
I am testing a new schema registry which loads and retrieves different kinds of Avro schemas. In the process of testing, I need to create a bunch of different types of Avro schemas. As it involves a lot of permutations, I decided to create the schemas programmatically. I am using the Apache Avro SchemaBuilder to do so.
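A minimal sketch of building a schema programmatically with Avro's `SchemaBuilder` fluent API (the record and field names here are made up for illustration):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class SchemaBuilderExample {
    public static void main(String[] args) {
        // Build a record schema equivalent to a hand-written .avsc file.
        Schema schema = SchemaBuilder.record("User")
            .namespace("com.example")   // hypothetical namespace
            .fields()
            .requiredString("name")     // non-nullable string field
            .optionalInt("age")         // union of null and int, default null
            .endRecord();

        // Pretty-print the resulting schema as JSON.
        System.out.println(schema.toString(true));
    }
}
```

Each `required*`/`optional*` call appends a field, and `endRecord()` finalizes the immutable `Schema`; to "update" an existing schema you rebuild it with the changed field list, since `Schema` objects themselves are immutable.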
- The state of Apache Avro in Rust
- How people generate examples for multiple programming languages?
- gRPC on the client side
Other serialization alternatives have a schema validation option, e.g. Avro, Kryo, and Protocol Buffers. Interestingly enough, gRPC uses Protobuf to offer RPC across distributed components.
- Understanding Azure Event Hubs Capture
Apache Avro is a data serialization system; for more information, visit Apache Avro.
- tl;dr of Data Contracts
Once formats like JSON became more popular, Apache Avro appeared. You can define Avro schema files, which can then be generated into Python, Java, C, Ruby, etc. classes.
- In One Minute : Hadoop
Avro, a data serialization system based on JSON schemas.
- Events: Fat or Thin?
Supporting multiple versions of an event schema is a solved problem. Apache Avro with a published schema hash in a message header is one solution.
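As a sketch of why Avro handles this well: a field added in a later schema version can carry a default, so a reader using the new schema can still decode records written with the old one (the event and field names below are illustrative):

```json
{
  "type": "record",
  "name": "OrderPlaced",
  "fields": [
    {"name": "orderId", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string", "default": "USD"}
  ]
}
```

If `currency` was added in v2, records written with the v1 schema simply resolve it to `"USD"` at read time, and v1 readers ignore the unknown field; the published schema hash in the message header tells consumers which writer schema to resolve against.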
https://avro.apache.org/
iceberg
- Iceberg won the table format war: But not in the way you thought it might
- Lakehouse using AWS Athena on Iceberg Concerns
- apache/iceberg: Apache Iceberg
- What are the main things I need to know to be hired as a Java developer?
- Have you used Athena Iceberg for small(-ish) data?
- Is Data Lakehouse a threat to Snowflake?
- Snowflake vs databricks cloud/labor cost
This is interesting, imo.
- Setting the Table: Benchmarking Open Table Formats
- Spark Dynamic Partition Overwrite Mode Replaces Existing Data
If you're using Iceberg as your table format, it had bugs with MERGE INTO with non-nullable columns up until September: https://github.com/apache/iceberg/pull/5679
- How to migrate delta tables to iceberg?
Yeah, this capability is a WIP and a discussion point in the Iceberg community: https://github.com/apache/iceberg/pull/5331
What are some alternatives?
Protobuf - Protocol Buffers - Google's data interchange format
kudu - Mirror of Apache Kudu
SBE - Simple Binary Encoding (SBE) - High Performance Message Codec
hudi - Upserts, Deletes And Incremental Processing on Big Data.
Apache Thrift - Apache Thrift
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
Apache Parquet - Apache Parquet
RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.
gRPC - The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs
Apache Orc - Apache ORC - the smallest, fastest columnar storage for Hadoop workloads
Dask - Parallel computing with task scheduling