borrowme
rust_serialization_benchmark
| | borrowme | rust_serialization_benchmark |
|---|---|---|
| Mentions | 3 | 22 |
| Stars | 48 | 546 |
| Growth | - | - |
| Activity | 6.0 | 7.7 |
| Latest commit | 17 days ago | about 1 month ago |
| Language | Rust | Rust |
| License | - | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
borrowme
-
Müsli - An experimental binary serialization framework with more choice
"Borrow heuristics" save a lot of configuration. It's basically just the macro analyzing the type signature of fields for references, and with that it can do the right thing 95% of the time. If those lifetimes weren't there, I suspect it would mean having to use a whole lot of attributes to make up for the lack of markup doing it the other way around (think of cases where there are multiple lifetimes). It's also unclear how it should work for nested types, like how you'd want a `Vec` of borrowed values to become a `Vec` of owned values. Some strange attribute would be needed, I think.
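To make the idea concrete, here is a hand-written sketch of the borrowed/owned struct pair that this kind of macro automates. The names (`Word`, `OwnedWord`, `to_owned`, `borrow`) are illustrative, not borrowme's actual generated API:

```rust
// A borrowed view over some source data...
struct Word<'a> {
    text: &'a str,
    lang: &'a str,
}

// ...and its fully owned counterpart. A compound-borrowing macro would
// derive this pair (and the conversions below) from the lifetimes in
// the field types.
struct OwnedWord {
    text: String,
    lang: String,
}

impl Word<'_> {
    // Convert every borrowed field into its owned form.
    fn to_owned(&self) -> OwnedWord {
        OwnedWord {
            text: self.text.to_string(),
            lang: self.lang.to_string(),
        }
    }
}

impl OwnedWord {
    // Re-borrow the owned data as the lightweight view type.
    fn borrow(&self) -> Word<'_> {
        Word {
            text: &self.text,
            lang: &self.lang,
        }
    }
}

fn main() {
    let owned = Word { text: "hello", lang: "en" }.to_owned();
    let view = owned.borrow();
    assert_eq!(view.text, "hello");
    println!("ok");
}
```

The point of the heuristic is that both structs and both conversions fall out of the `&'a str` field types alone, with no extra attributes.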
-
borrowme 0.0.10 - the missing compound borrowing for Rust
I noted in my comment above that I was working on it. Here's the current PR.
rust_serialization_benchmark
-
Rkyv: zero-copy deserialization framework for Rust
https://github.com/djkoloski/rust_serialization_benchmark
Apache/arrow-rs: https://github.com/apache/arrow-rs
From https://arrow.apache.org/faq/ :
> How does Arrow relate to Flatbuffers?
> Flatbuffers is a low-level building block for binary data serialization. It is not adapted to the representation of large, structured, homogenous data, and does not sit at the right abstraction layer for data analysis tasks.
> Arrow is a data layer aimed directly at the needs of data analysis, providing a comprehensive collection of data types required for analytics, built-in support for “null” values (representing missing data), and an expanding toolbox of I/O and computing facilities.
> The Arrow file format does use Flatbuffers under the hood to serialize schemas and other metadata needed to implement the Arrow binary IPC protocol, but the Arrow data format uses its own representation for optimal access and computation.
-
Comfy Engine 0.3 - No Lifetimes, User Shaders, Text Rendering, 2.5D, LDTK
Nice that Comfy gets even easier. Also, if serde's compile time is an issue, there's nanoserde, which is usually much faster according to benchmarks.
-
Müsli - An experimental binary serialization framework with more choice
A note on performance and size: some benchmarks and statistics are included in the README, but only because people will be curious. I've based my methodology on rust_serialization_benchmark, but decided not to extend it (for now) since it seems to exclude any Rust types which are not widely supported by all formats being tested (like HashMaps and 128-bit numbers). The test suite is already quite nice if you want to take it for a spin.
-
bitcode 0.4 release - binary serialization format
While we haven't benchmarked either of those ourselves, you can check out rust_serialization_benchmark, which has protobuf under the name prost.
-
Announcing bitcode format for serde
Update: Benchmark PR submitted: https://github.com/djkoloski/rust_serialization_benchmark/pull/37
-
Best format for high-performance Serde?
Here is a speed and size benchmark of different Rust binary serialization formats: https://github.com/djkoloski/rust_serialization_benchmark Warning: I think the creator of this benchmark is also the creator of rkyv, one of the best-positioned formats in the benchmark.
-
Grammatical, automatic furigana with SQLite and Rust
So I assume you're deserializing them before processing the book? If so, and you want an easy speed-up, you could also take a look at these benchmarks and pick a faster serialization crate. (: (Although you might or might not get a big speedup; it depends on what exactly you're deserializing and how much.)
-
GitHub - epage/parse-benchmarks-rs
You can add the rust serialization benchmark to that list
-
The run-up to v1.0 for Postcard
Hey! Like bincode, it provides a compact binary format. The rkyv benchmark is the most comprehensive one I'm aware of, but compared to bincode, postcard is generally a similar speed for serialization and deserialization (maybe a touch slower), while generally producing a slightly smaller "on the wire" size.
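One reason compact formats like postcard tend to produce smaller payloads is variable-length integer encoding, where small values take fewer bytes than a fixed-width field would. Below is a minimal LEB128-style varint sketch for illustration; it is not postcard's exact wire format (postcard uses its own varint scheme), just a demonstration of the general technique:

```rust
// Encode an unsigned integer as a varint: 7 payload bits per byte,
// with the high bit set on every byte except the last.
fn encode_varint(mut v: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}

// Decode a varint, returning the value and the number of bytes consumed.
fn decode_varint(bytes: &[u8]) -> (u64, usize) {
    let mut v = 0u64;
    for (i, b) in bytes.iter().enumerate() {
        v |= u64::from(b & 0x7f) << (7 * i);
        if b & 0x80 == 0 {
            return (v, i + 1);
        }
    }
    panic!("truncated varint");
}

fn main() {
    let mut buf = Vec::new();
    encode_varint(300, &mut buf);
    // 300 fits in two varint bytes, versus eight for a fixed-width u64.
    assert_eq!(buf.len(), 2);
    let (value, used) = decode_varint(&buf);
    assert_eq!((value, used), (300, 2));
    println!("ok");
}
```

The trade-off is a little per-byte shifting and masking at encode/decode time in exchange for the smaller on-the-wire size the comment describes.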
-
I made a blazing fast and small new data serialization format called "DLHN" in Rust.
You should add your crate to these benchmarks. (Which are, AFAIK, the most comprehensive set of benchmarks currently available for Rust serialization libraries.)