arrow-rs vs rust_serialization_benchmark

| | arrow-rs | rust_serialization_benchmark |
|---|---|---|
| Mentions | 16 | 22 |
| Stars | 2,198 | 514 |
| Growth | 3.4% | - |
| Activity | 9.8 | 7.7 |
| Last commit | 2 days ago | 7 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arrow-rs
-
Rkyv: Rkyv zero-copy deserialization framework for rust
https://github.com/djkoloski/rust_serialization_benchmark
Apache/arrow-rs: https://github.com/apache/arrow-rs
From https://arrow.apache.org/faq/ :
> How does Arrow relate to Flatbuffers?
> Flatbuffers is a low-level building block for binary data serialization. It is not adapted to the representation of large, structured, homogenous data, and does not sit at the right abstraction layer for data analysis tasks.
> Arrow is a data layer aimed directly at the needs of data analysis, providing a comprehensive collection of data types required to analytics, built-in support for “null” values (representing missing data), and an expanding toolbox of I/O and computing facilities.
> The Arrow file format does use Flatbuffers under the hood to serialize schemas and other metadata needed to implement the Arrow binary IPC protocol, but the Arrow data format uses its own representation for optimal access and computation
-
Polars: Company Formation Announcement
One of the interesting components of Polars that I've been watching is the use of the Apache Arrow memory format, which is a standard layout for data in memory that enables processing (querying, iterating, calculating, etc) in a language agnostic way, in particular without having to copy/convert it into the local object format first. This enables cross-language data access by mmaping or transferring a single buffer, with zero [de]serialization overhead.
For some history, there has been a bit of contention between the official arrow-rs implementation and the arrow2 implementation created by the Polars team, which includes some extra features that they find important. I think the current status is that everyone agrees that having two crates implementing the same standard is not ideal, and they are working to port any necessary features to the arrow-rs crate, with a plan to eventually switch to it and deprecate arrow2. But that's not easy.
https://github.com/apache/arrow-rs/issues/1176
https://github.com/jorgecarleitao/arrow2/pull/1476
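To make the zero-copy point above concrete, here is a minimal sketch using the arrow crate (arrow-rs): build a RecordBatch, write it with the Arrow IPC file format (whose schema metadata is Flatbuffers-encoded, as the FAQ quote notes), and iterate the batches back as columnar arrays. The file and column names are illustrative, not from the original posts.

```rust
// Minimal sketch: Arrow IPC round-trip with arrow-rs.
use std::{fs::File, sync::Arc};

use arrow::array::{Float64Array, Int64Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::ipc::reader::FileReader;
use arrow::ipc::writer::FileWriter;
use arrow::record_batch::RecordBatch;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Two columns: a non-nullable id and a nullable measurement.
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int64, false),
        Field::new("value", DataType::Float64, true),
    ]));
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(Int64Array::from(vec![1, 2, 3])),
            Arc::new(Float64Array::from(vec![Some(0.1), None, Some(0.3)])),
        ],
    )?;

    // Write the batch in the Arrow IPC file format.
    let mut writer = FileWriter::try_new(File::create("batches.arrow")?, &schema)?;
    writer.write(&batch)?;
    writer.finish()?;

    // Read it back: columns come out as Arrow arrays, with no row-by-row decoding.
    let reader = FileReader::try_new(File::open("batches.arrow")?, None)?;
    for maybe_batch in reader {
        let batch = maybe_batch?;
        println!("read {} rows x {} columns", batch.num_rows(), batch.num_columns());
    }
    Ok(())
}
```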
-
InfluxDB 3.0 System Architecture
It's built around the arrow-rs library, which we've contributed to significantly: https://github.com/apache/arrow-rs
-
best cache type for 5gb size tables
For loading Parquet in memory, probably worth a look at arrow-rs.
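As a rough illustration of that suggestion, the parquet crate ships an Arrow-based reader; the sketch below assumes a recent parquet release exposing ParquetRecordBatchReaderBuilder, and the file path and batch size are illustrative.

```rust
// Sketch: load a Parquet file into in-memory Arrow RecordBatches.
use std::fs::File;

use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("table.parquet")?;
    let reader = ParquetRecordBatchReaderBuilder::try_new(file)?
        .with_batch_size(8192)
        .build()?;

    let mut rows = 0usize;
    for batch in reader {
        let batch = batch?; // each item is an Arrow RecordBatch
        rows += batch.num_rows();
    }
    println!("loaded {rows} rows");
    Ok(())
}
```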
-
The state of Apache Avro in Rust
From what I've seen, most of the Rust community seems to be adopting Apache Arrow as the go-to for data processing. It has strong community support and good interoperability with many cross-language tools. It is natively a columnar format. If a row-oriented format is a must for your use case, consider looking into alternatives like gRPC that might better suit your needs.
- Arrow-Rs - Official Rust implementation of Apache Arrow
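A small dependency-free sketch of the row-oriented vs columnar distinction being drawn here; the struct and field names are made up for illustration.

```rust
// Row-oriented: one struct per record (the shape Avro/serde-style formats model).
struct TradeRow {
    symbol: String,
    price: f64,
    size: u32,
}

// Column-oriented: one contiguous vector per field (the shape Arrow models).
struct TradeColumns {
    symbol: Vec<String>,
    price: Vec<f64>,
    size: Vec<u32>,
}

fn main() {
    let rows = vec![
        TradeRow { symbol: "AAPL".into(), price: 187.3, size: 100 },
        TradeRow { symbol: "MSFT".into(), price: 402.1, size: 50 },
    ];

    // Transpose rows into columns; an aggregate like sum(price) now scans one
    // contiguous buffer instead of hopping across heterogeneous records.
    let cols = TradeColumns {
        symbol: rows.iter().map(|r| r.symbol.clone()).collect(),
        price: rows.iter().map(|r| r.price).collect(),
        size: rows.iter().map(|r| r.size).collect(),
    };
    let total: f64 = cols.price.iter().sum();
    println!("{} rows, total price {}", cols.symbol.len(), total);
}
```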
-
Apache Arrow Feature Parity Timeline?
That matrix doesn't seem up to date. For example, looking at the Rust crate, it does seem to support things like map, float16, and IPC. The changelog shows an impressive development pace.
-
Apache Arrow Flight SQL: Accelerating Database Access
Oh, and for anyone interested in pitching in on the Rust implementation, there's an issue logged here along with some discussion: https://github.com/apache/arrow-rs/issues/1323
-
February 2022 Rust Apache Arrow and Parquet Highlights
There is more discussion about the decision here: https://github.com/apache/arrow-rs/issues/1120
-
Arrow2 0.9 has been released
I'm still not sure how this differs from https://github.com/apache/arrow-rs. What does transmute even mean?
rust_serialization_benchmark
-
Comfy Engine 0.3 - No Lifetimes, User Shaders, Text Rendering, 2.5D, LDTK
Nice that comfy gets even easier. Also, if serde's compile time is an issue, there's nanoserde, which is usually much, much faster according to benchmarks.
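For context, a hedged sketch of what the nanoserde derive workflow looks like for its binary format, assuming the SerBin/DeBin traits and their serialize_bin/deserialize_bin helpers; the struct is invented for illustration.

```rust
use nanoserde::{DeBin, SerBin};

// An illustrative game-save struct; nanoserde derives the codec without serde.
#[derive(Debug, PartialEq, SerBin, DeBin)]
struct SaveGame {
    level: u32,
    health: f32,
    name: String,
}

fn main() {
    let save = SaveGame { level: 7, health: 93.5, name: "comfy".into() };

    let bytes: Vec<u8> = save.serialize_bin();
    let back: SaveGame = DeBin::deserialize_bin(&bytes).expect("round-trip failed");

    assert_eq!(save, back);
    println!("{} bytes on the wire", bytes.len());
}
```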
-
Müsli - An experimental binary serialization framework with more choice
A note on performance and size: some benchmarks and statistics are included in the README, but only because people will be curious. I've based my methodology on rust_serialization_benchmark, but decided not to extend it (for now), since it seems to exclude any Rust types that are not widely supported by all formats being tested (like HashMaps and 128-bit numbers). The test suite is already quite nice if you want to take it for a spin.
-
bitcode 0.4 release - binary serialization format
We haven't benchmarked either of those ourselves, but you can check out rust_serialization_benchmark, which has protobuf under the name prost.
-
Announcing bitcode format for serde
Update: Benchmark PR submitted: https://github.com/djkoloski/rust_serialization_benchmark/pull/37
-
Best format for high-performance Serde?
Here is a speed and size benchmark of different rust binary serialization formats: https://github.com/djkoloski/rust_serialization_benchmark Warning: I think the creator of this benchmark is also the creator of rkyv, one of the best positioned formats in the benchmark.
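For a feel of what these benchmarks measure, here is a hedged round-trip sketch using the bincode 1.x serde API (serialize/deserialize free functions); the struct is invented for illustration.

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, PartialEq, Serialize, Deserialize)]
struct Packet {
    id: u64,
    payload: Vec<u8>,
    tag: Option<String>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let packet = Packet { id: 42, payload: vec![1, 2, 3], tag: Some("hello".into()) };

    // Encode to a compact binary buffer, then decode it back.
    let bytes = bincode::serialize(&packet)?;
    let decoded: Packet = bincode::deserialize(&bytes)?;

    assert_eq!(packet, decoded);
    println!("{} bytes with bincode", bytes.len());
    Ok(())
}
```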
-
Grammatical, automatic furigana with SQLite and Rust
So I assume you're deserializing them before processing the book? If so, and you want an easy speed-up, you could also take a look at these benchmarks and pick a faster serialization crate. (: (Although you might or might not get a big speedup; it depends on what exactly you're deserializing and how much of it.)
-
GitHub - epage/parse-benchmarks-rs
You can add the rust serialization benchmark to that list
-
The run-up to v1.0 for Postcard
Hey! Like bincode, it provides a compact binary format. The rkyv benchmark is the most comprehensive I'm aware of; compared to bincode, postcard is generally a similar speed for serialization and deserialization (maybe a touch slower) but produces a slightly smaller "on the wire" size.
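A hedged sketch of the postcard side of that comparison, assuming its to_allocvec/from_bytes API (requires the alloc feature); the struct is invented for illustration.

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, PartialEq, Serialize, Deserialize)]
struct Reading {
    sensor_id: u32,
    celsius: f32,
    flags: u8,
}

fn main() -> Result<(), postcard::Error> {
    let reading = Reading { sensor_id: 7, celsius: 21.5, flags: 0b0000_0011 };

    // postcard uses varint-style integer encoding, which is typically where the
    // smaller "on the wire" size comes from relative to bincode's fixed-width ints.
    let bytes = postcard::to_allocvec(&reading)?;
    let back: Reading = postcard::from_bytes(&bytes)?;

    assert_eq!(reading, back);
    println!("{} bytes with postcard", bytes.len());
    Ok(())
}
```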
-
I made a blazing fast and small new data serialization format called "DLHN" in Rust.
You should add your crate to these benchmarks. (Which are, AFAIK, the most comprehensive set of benchmarks currently available for Rust serialization libraries.)
What are some alternatives?
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
json-benchmark - nativejson-benchmark in Rust
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
rust-serialization-benchmarks
arrow2 - Transmute-free Rust library to work with the Arrow format
bebop - 🎷No ceremony, just code. Blazing fast, typesafe binary serialization.
datafusion - Apache DataFusion SQL Query Engine
unsafe-code-guidelines - Forum for discussion about what unsafe code can and can't do
byo-sql - An in-memory SQL database in Rust.
dlhn - DLHN implementation for Rust
db-benchmark - reproducible benchmark of database-like ops
bincode - A binary encoder / decoder implementation in Rust.