Kryo
Java binary serialization and cloning: fast, efficient, automatic (by EsotericSoftware)
Apache Avro
Apache Avro is a data serialization system. (by apache)
| | Kryo | Apache Avro |
|---|---|---|
| Mentions | 4 | 22 |
| Stars | 6,036 | 2,744 |
| Growth | 0.7% | 1.6% |
| Activity | 8.3 | 9.7 |
| Latest commit | 11 days ago | about 21 hours ago |
| Language | HTML | Java |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Kryo
Posts with mentions or reviews of Kryo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-16.
- gRPC on the client side

  Other serialization alternatives have a schema validation option: e.g., Avro, Kryo, and Protocol Buffers. Interestingly enough, gRPC uses Protobuf to offer RPC across distributed components.
- Marshaling objects in modern Java

  If you need something quick and dirty to replace the default Java serialization with zero configuration needed, use Kryo.
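  As a minimal sketch of that "zero configuration" round trip, assuming the Kryo 5 jar is on the classpath (the `Point` class here is purely illustrative):

  ```java
  import com.esotericsoftware.kryo.Kryo;
  import com.esotericsoftware.kryo.io.Input;
  import com.esotericsoftware.kryo.io.Output;

  public class KryoRoundTrip {
      // Hypothetical data class used only for this example.
      static class Point {
          int x, y;
          Point() {}                        // Kryo needs a no-arg constructor by default
          Point(int x, int y) { this.x = x; this.y = y; }
      }

      public static void main(String[] args) {
          Kryo kryo = new Kryo();
          kryo.register(Point.class);       // class registration is required by default in Kryo 5

          Output output = new Output(1024); // serialize into an in-memory buffer
          kryo.writeObject(output, new Point(3, 4));
          byte[] bytes = output.toBytes();

          Input input = new Input(bytes);   // deserialize back
          Point copy = kryo.readObject(input, Point.class);
          System.out.println(copy.x + "," + copy.y);
      }
  }
  ```

  Note that, unlike Avro or Protobuf, no external schema is involved: Kryo works directly from the Java classes themselves.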
- Downsides to using sun.misc.unsafe for serialization (assuming the code is thoroughly tested)?
Apache Avro
Posts with mentions or reviews of Apache Avro. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-14.
- Generating Avro Schemas from Go types

  The most common format for describing schema in this scenario is Apache Avro.
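  For reference, an Avro schema is itself plain JSON. A minimal record type might look like the following (the names and fields are illustrative, not taken from the post):

  ```json
  {
    "type": "record",
    "name": "User",
    "namespace": "com.example",
    "fields": [
      {"name": "id",    "type": "long"},
      {"name": "email", "type": "string"},
      {"name": "age",   "type": ["null", "int"], "default": null}
    ]
  }
  ```

  The union `["null", "int"]` with a `null` default is the idiomatic way to mark a field optional in Avro.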
- The state of Apache Avro in Rust
- How do people generate examples for multiple programming languages?
- gRPC on the client side

  Other serialization alternatives have a schema validation option: e.g., Avro, Kryo, and Protocol Buffers. Interestingly enough, gRPC uses Protobuf to offer RPC across distributed components.
- Understanding Azure Event Hubs Capture

  Apache Avro is a data serialization system; for more information, visit Apache Avro.
- In One Minute: Hadoop

  Avro, a data serialization system based on JSON schemas.
- Protocol Buffers vs. JSON for data serialization
- Marshaling objects in modern Java

  If a binary format is OK, use Protocol Buffers or Avro. Note that in the case of binary formats, you need a schema to serialize/deserialize your data. Therefore, you'd probably want a schema registry to store all past and present schemas for later usage.
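  The schema-driven nature of Avro can be sketched like this, assuming the Apache Avro Java library is on the classpath (the `User` schema is illustrative; in practice the schema would come from a registry):

  ```java
  import java.io.ByteArrayOutputStream;
  import org.apache.avro.Schema;
  import org.apache.avro.generic.GenericData;
  import org.apache.avro.generic.GenericDatumWriter;
  import org.apache.avro.generic.GenericRecord;
  import org.apache.avro.io.BinaryEncoder;
  import org.apache.avro.io.DatumWriter;
  import org.apache.avro.io.EncoderFactory;

  public class AvroSketch {
      public static void main(String[] args) throws Exception {
          // Illustrative schema; a schema registry would normally hold
          // all past and present versions of it.
          Schema schema = new Schema.Parser().parse(
              "{\"type\":\"record\",\"name\":\"User\",\"fields\":" +
              "[{\"name\":\"id\",\"type\":\"long\"}]}");

          GenericRecord user = new GenericData.Record(schema);
          user.put("id", 42L);

          // Binary-encode the record; the payload carries no field names,
          // which is why the schema is required again to read it back.
          ByteArrayOutputStream out = new ByteArrayOutputStream();
          DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
          BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
          writer.write(user, encoder);
          encoder.flush();
          System.out.println(out.size() + " bytes");
      }
  }
  ```

  Because the payload omits field names and types, reading it requires the writer's schema (or a compatible reader schema), which is exactly the role a schema registry fills.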
- How-to-Guide: Contributing to Open Source

  Apache Avro
- How should I handle storing and reading from large amounts of data in my project?

  Maybe it will be simpler to serialise all the data in a more compact data format, such as Avro (its README is here), a row-based format that seems to be able to use zstd/bzip/xz.
What are some alternatives?
When comparing Kryo and Apache Avro you can also consider the following projects:
FST - FST: fast Java serialization drop-in replacement
Protobuf - Protocol Buffers - Google's data interchange format
FlatBuffers - FlatBuffers: Memory Efficient Serialization Library
MessagePack - MessagePack serializer implementation for Java / msgpack.org[Java]
SBE - Simple Binary Encoding (SBE) - High Performance Message Codec
protostuff - Java serialization library, proto compiler, code generator
Apache Thrift - Apache Thrift
iceberg - Apache Iceberg
Apache Parquet - Apache Parquet
gRPC - The C-based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)
Apache Orc - Apache ORC - the smallest, fastest columnar storage for Hadoop workloads
hudi - Upserts, Deletes And Incremental Processing on Big Data.