bitvec vs rust_serialization_benchmark
| | bitvec | rust_serialization_benchmark |
|---|---|---|
| Mentions | 17 | 22 |
| Stars | 1,138 | 512 |
| Growth | 1.5% | - |
| Activity | 0.0 | 7.7 |
| Last Commit | 10 days ago | 3 days ago |
| Language | Rust | Rust |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bitvec
-
bitcode 0.4 release - binary serialization format
I was also under the false impression that bitwise encoding was slow. When I first implemented bitcode with bitvec I got performance 20x worse than bincode. After writing my own implementation I was able to get much better performance.
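The performance gap described above is plausible: a general-purpose bit-vector pays per-bit bookkeeping costs that a purpose-built encoder can avoid. As a hedged illustration (not bitcode's actual implementation), a minimal bit writer can accumulate bits in a single `u64` and flush whole words at a time:

```rust
// Hypothetical sketch of a word-buffered bit writer. Bits are packed
// LSB-first into an accumulator and flushed to the output in 64-bit
// chunks, avoiding per-bit slice/index bookkeeping.
struct BitWriter {
    words: Vec<u64>,
    acc: u64,
    used: u32,
}

impl BitWriter {
    fn new() -> Self {
        Self { words: Vec::new(), acc: 0, used: 0 }
    }

    fn push_bit(&mut self, bit: bool) {
        self.acc |= (bit as u64) << self.used;
        self.used += 1;
        if self.used == 64 {
            // Flush a full word in one store.
            self.words.push(self.acc);
            self.acc = 0;
            self.used = 0;
        }
    }

    // Flush any partial word and return the packed buffer.
    fn finish(mut self) -> Vec<u64> {
        if self.used > 0 {
            self.words.push(self.acc);
        }
        self.words
    }
}

fn main() {
    let mut w = BitWriter::new();
    // Write the bit pattern 1, 1, 0, 1 (LSB-first).
    for bit in [true, true, false, true] {
        w.push_bit(bit);
    }
    let words = w.finish();
    assert_eq!(words, vec![0b1011]);
}
```

The point of the sketch is only that batching bits into a register before touching memory is the kind of change that can close a large constant-factor gap.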
-
An optimized replacement of the infamous std::vector<🅱️ool>
interesting; i'll have to compare this to my rust counterpart. your numbers indicate some clever implementations i'd love to read
-
You need to stop idolizing programming languages.
Not to mention having a lackluster std which causes you to use nonstandard, not-so-well-documented crates and a 40K LoC library to do "bit-twiddling" (the lib: https://github.com/bitvecto-rs/bitvec; the blog that says "twiddle bits": https://blog.adamchalmers.com/making-a-dns-client/; and for crying out loud the blogger also used the language the author mentioned, and I quote: "ergonomics AND speed AND correctness")
- bit-twiddling tricks. It's the perfect example of Rust's no-compromises "ergonomics AND speed AND correctness" ideals
-
An Armful of CHERIs: Memory Safety in the processor. Do we still need safe languages with CHERI?
https://github.com/bitvecto-rs/bitvec/issues/135 is a very funny read about how to perform inttoptr with provenance retention
-
bitvec 1.0.0 Released
Technically #135 gives me license to yank affected crates, but since the only exploit is "Miri crashes exactly one test out of the suite" it's not really worth it to be a stickler. Call it a truce
-
What are some creative/advanced uses of macro_rules?
My friend Nika wrote a macro that packs a sequence of 1, 0, … tokens into a correctly structured bit-buffer, adaptable over any register type or bit-ordering, at compile time. It's now basically this whole file
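A much-simplified sketch of the idea (not the macro described above, which adapts over register types and bit orderings) might look like this: a `macro_rules!` macro that folds a sequence of bit tokens into a single integer at compile time.

```rust
// Hypothetical, simplified sketch: pack 1/0 tokens into a u8, MSB-first,
// entirely at compile time via const evaluation.
macro_rules! bits {
    ($($b:literal),* $(,)?) => {{
        let mut acc: u8 = 0;
        $( acc = (acc << 1) | (($b as u8) & 1); )*
        acc
    }};
}

// Evaluated at compile time: 1,0,1,1 -> 0b1011.
const FLAGS: u8 = bits!(1, 0, 1, 1);

fn main() {
    assert_eq!(FLAGS, 0b1011);
}
```

The real version would additionally need to emit a buffer of the chosen register type rather than a single byte, but the repetition-plus-fold pattern is the core trick.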
-
Where do I document a published crate?
if you are interested in a user manual, you can use mdbook as well. for an example, my bitvec project uses mdbook (book.toml) and a github action (.github/workflows/gh-pages.yml) to compile the guide and host it as a github pages website. it's slightly more complicated, and i'd like docs.rs to follow hexdocs.pm's example of hosting both api docs and prose, but until then this is a pretty reasonable solution.
-
Idiomatic Way to Validate Struct Field Values
the first one
-
When and how to use traits?
i would browse the standard library, tower, nom, or my own bitvec to see layout and trait/record separation. in particular, std::io and std::net may be of use: io::Read and io::Write are pervasive examples of implementing unixy file-descriptor-like behavior in the type system
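To make the `io::Read`/`io::Write` suggestion concrete, here is a hedged minimal example (names invented for illustration): a tiny source type that implements `std::io::Read`, which immediately composes with every generic reader utility in std.

```rust
use std::io::{self, Read};

// Illustrative type: a "file-descriptor-like" source that yields a fixed
// number of zero bytes, expressed through the io::Read trait.
struct Zeros {
    remaining: usize,
}

impl Read for Zeros {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.remaining.min(buf.len());
        for b in &mut buf[..n] {
            *b = 0;
        }
        self.remaining -= n;
        Ok(n) // Ok(0) once exhausted signals end-of-stream.
    }
}

fn main() -> io::Result<()> {
    let mut src = Zeros { remaining: 10 };
    let mut out = Vec::new();
    // read_to_end is a default method we get for free from the trait.
    src.read_to_end(&mut out)?;
    assert_eq!(out.len(), 10);
    Ok(())
}
```

Implementing the one required method (`read`) buys all of the trait's default methods and interoperability with adapters like `io::BufReader` and `Read::chain`, which is the "behavior in the type system" point above.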
rust_serialization_benchmark
-
Rkyv: Rkyv zero-copy deserialization framework for rust
https://github.com/djkoloski/rust_serialization_benchmark
Apache/arrow-rs: https://github.com/apache/arrow-rs
From https://arrow.apache.org/faq/ :
> How does Arrow relate to Flatbuffers?
> Flatbuffers is a low-level building block for binary data serialization. It is not adapted to the representation of large, structured, homogenous data, and does not sit at the right abstraction layer for data analysis tasks.
> Arrow is a data layer aimed directly at the needs of data analysis, providing a comprehensive collection of data types required for analytics, built-in support for “null” values (representing missing data), and an expanding toolbox of I/O and computing facilities.
> The Arrow file format does use Flatbuffers under the hood to serialize schemas and other metadata needed to implement the Arrow binary IPC protocol, but the Arrow data format uses its own representation for optimal access and computation
-
Comfy Engine 0.3 - No Lifetimes, User Shaders, Text Rendering, 2.5D, LDTK
Nice that comfy gets even easier. Also, if serde's compile time is an issue, then there's nanoserde which is usually much much faster according to benchmarks
-
Müsli - An experimental binary serialization framework with more choice
A note on performance and size: Some benchmarks and statistics are included in the README, but only because people will be curious. I've based my methodology on rust_serialization_benchmark, but decided not to extend it (for now) since it seems to exclude any Rust types which are not widely supported by all formats being tested (like HashMaps and 128-bit numbers). The test suite is already quite nice if you want to take it for a spin.
-
bitcode 0.4 release - binary serialization format
While we haven't benchmarked either of those ourselves, you can check out rust_serialization_benchmark, which has protobuf under the name prost.
-
Announcing bitcode format for serde
Update: Benchmark PR submitted: https://github.com/djkoloski/rust_serialization_benchmark/pull/37
-
Best format for high-performance Serde?
Here is a speed and size benchmark of different rust binary serialization formats: https://github.com/djkoloski/rust_serialization_benchmark Warning: I think the creator of this benchmark is also the creator of rkyv, one of the best positioned formats in the benchmark.
-
Grammatical, automatic furigana with SQLite and Rust
So I assume you're deserializing them before processing the book? If so then if you want an easy speed-up you could also take a look at these benchmarks and pick a faster serialization crate. (: (Although you might or might not get a big speedup; depends on what exactly you're deserializing and how much you are deserializing.)
-
GitHub - epage/parse-benchmarks-rs
You can add the rust serialization benchmark to that list
-
The run-up to v1.0 for Postcard
Hey! Like bincode, it provides a compact binary format. The rkyv benchmark is the most comprehensive I'm aware of, but compared to bincode, postcard is generally a similar speed for serialization or deserialization (maybe a touch slower), but generally produces a slightly smaller "on the wire" size.
-
I made a blazing fast and small new data serialization format called "DLHN" in Rust.
You should add your crate to these benchmarks. (Which are, AFAIK, the most comprehensive set of benchmarks currently available for Rust serialization libraries.)
What are some alternatives?
nom - Rust parser combinator framework
json-benchmark - nativejson-benchmark in Rust
rfcs - RFCs for changes to Rust
rust-serialization-benchmarks
time - The most used Rust library for date and time handling.
bebop - 🎷No ceremony, just code. Blazing fast, typesafe binary serialization.
byteorder - Rust library for reading/writing numbers in big-endian and little-endian.
unsafe-code-guidelines - Forum for discussion about what unsafe code can and can't do
tower - async fn(Request) -> Result<Response, Error>
dlhn - DLHN implementation for Rust
hardcaml - Hardcaml is an OCaml library for designing hardware.
bincode - A binary encoder / decoder implementation in Rust.