json-benchmark VS rust_serialization_benchmark

Compare json-benchmark vs rust_serialization_benchmark and see what their differences are.

rust_serialization_benchmark

Benchmarks for rust serialization frameworks (by djkoloski)
                json-benchmark       rust_serialization_benchmark
Mentions        12                   22
Stars           169                  512
Growth          4.7%                 -
Activity        4.8                  7.7
Latest commit   about 1 month ago    1 day ago
Language        C++                  Rust
License         Apache License 2.0   -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

json-benchmark

Posts with mentions or reviews of json-benchmark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-22.
  • Do You Know How Much Your Computer Can Do in a Second?
    2 projects | news.ycombinator.com | 22 Jun 2023
    I don’t really understand what this is trying to prove:

    - you don’t seem to specify the size of the input. This is the most important omission

    - you are constructing an optimised representation (in this case, a struct with fields in the right places) instead of a generic ‘dumb’ representation that is more like a tree of Python dicts

    - Rust is not a ‘moderately fast language’, IMO (though this is not a very important point. It’s more about how optimised the parser is, and I suspect that serde_json is written in an optimised way, but I didn’t look very hard).

    I found [1], which puts serde_json parsing to a DOM at 300-400 MB/s on a somewhat old laptop CPU. A simpler implementation runs at 100-200 MB/s, and a very optimised implementation gets 400-800 MB/s. But I don’t think this does much to confirm what I said in the comment you replied to. The numbers for simd-json are a bit lower than I expected (maybe due to the ‘DOM’ part). I think my 50 MB/s number was probably a bit off, but maybe the Python implementation converts the JSON to some C object and then converts that C object into Python objects. That might halve your throughput (my guess is that this is roughly what the ‘struct parse’ case for rustc_serialize is doing).

    [1] https://github.com/serde-rs/json-benchmark
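
    As a rough, hypothetical illustration of the struct-vs-DOM distinction discussed above (the Point type and the input are made up, not taken from the linked benchmark), these are the two serde_json code paths being compared:

      use serde::Deserialize;

      // Typed, "optimised" representation: fields land directly in the struct.
      #[derive(Deserialize)]
      struct Point {
          x: f64,
          y: f64,
      }

      fn main() -> Result<(), serde_json::Error> {
          let input = r#"{"x":1.0,"y":2.0}"#;

          // Struct parse: no intermediate tree of values is built.
          let p: Point = serde_json::from_str(input)?;

          // DOM parse: every object becomes a heap-allocated map of Values,
          // i.e. the generic "tree of dicts" representation.
          let v: serde_json::Value = serde_json::from_str(input)?;

          println!("struct: ({}, {}); dom x = {}", p.x, p.y, v["x"]);
          Ok(())
      }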

  • Serde Json vs Rapidjson (Rust vs C++)
    6 projects | /r/rust | 17 Jan 2023
    But the code OP posted deserializes JSON without knowing anything about the structure, which is known to be slow in serde-json and doesn't appear to be the focus for the library. The json and json-deserializer crates should perform much better in that scenario.
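
    For the untyped case mentioned here, a minimal sketch of the json crate (json-rust) parsing a made-up document into its dynamic JsonValue type, with no struct declared up front:

      fn main() -> Result<(), json::Error> {
          // Hypothetical input whose structure is unknown at compile time.
          let raw = r#"{"user":"ferris","score":9001}"#;

          // json-rust parses into its own dynamic JsonValue type.
          let parsed = json::parse(raw)?;

          // Field access without declaring any Rust struct up front.
          println!("{} -> {}", parsed["user"], parsed["score"]);
          Ok(())
      }
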
  • Good example of high performance Rust project without unsafe code?
    20 projects | /r/rust | 2 Aug 2022
  • I'm a veteran C++ programmer, what can Rust offer me?
    2 projects | /r/rust | 24 Mar 2022
  • Rust is just as fast as C/C++
    6 projects | /r/rust | 23 Feb 2022
    Of course that doesn't mean that in practice the available libraries are as optimized. Did you try actix? It tends to be faster than rocket. Also, json-rust and simd-json are usually faster than serde-json when you don't deserialize a known structure. Here are some benchmarks: https://github.com/serde-rs/json-benchmark
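
    To make the unknown-structure point concrete, here is a hedged sketch of simd-json's untyped API as I understand it; it parses in place, so it wants a mutable byte buffer (the input is made up):

      fn main() -> Result<(), simd_json::Error> {
          // simd-json mutates the buffer while parsing, so it needs owned bytes.
          let mut bytes = br#"{"name":"actix","requests":120000}"#.to_vec();

          // Untyped, DOM-style parse; values borrow from the buffer where possible.
          let value = simd_json::to_borrowed_value(&mut bytes)?;

          println!("{:?}", value);
          Ok(())
      }
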
  • Lightweight template-based parser build system. Simple prototyping. Comfortable debugging. Effective developing.
    1 project | /r/dartlang | 23 Jan 2022
    The data for the test is taken from here: https://github.com/serde-rs/json-benchmark/tree/master/data
  • Performance of serde js value conversion and reference types
    1 project | /r/rust | 10 Nov 2021
    Here are some benchmarks https://github.com/serde-rs/json-benchmark
  • Serde zero-copy benchmarks?
    2 projects | /r/rust | 1 Apr 2021
    I found two projects: * https://github.com/djkoloski/rust_serialization_benchmark - doesn't use Serde zero copy * https://github.com/serde-rs/json-benchmark - has copy vs borrowed, but the results were the same for both, so something's off there
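
    For reference, "zero copy" with Serde usually means borrowing string slices straight out of the input buffer; a minimal sketch with a made-up Record type:

      use serde::Deserialize;
      use std::borrow::Cow;

      #[derive(Deserialize)]
      struct Record<'a> {
          // &str borrows directly from the input buffer (true zero-copy)...
          id: &'a str,
          // ...while Cow falls back to an owned String when the JSON needs unescaping.
          #[serde(borrow)]
          name: Cow<'a, str>,
      }

      fn main() -> Result<(), serde_json::Error> {
          let input = String::from(r#"{"id":"abc123","name":"Ferris"}"#);
          let record: Record<'_> = serde_json::from_str(&input)?;
          println!("{} {}", record.id, record.name);
          Ok(())
      }
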
  • Android Developers Have A Tough Life
    1 project | /r/ProgrammerAnimemes | 3 Mar 2021
    Rust has a good enough standard library (I’d say comparable to C++) that you don’t really need packages for a lot of stuff. Most of my projects have 1 or 2 dependencies. Most of the time I am pulling in a JSON parser (serde) and a parallelization library (rayon). These are both high-performance libraries that make writing fast code easy (serde can handle 850 MB/s on a 5-year-old laptop CPU, per their benchmarks). Rayon is one of the best parallelism libraries I’ve worked with.
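
    As a sketch of how those two crates commonly combine (the Event type and the newline-delimited input are hypothetical), serde_json does the per-record parsing and rayon spreads it over a thread pool:

      use rayon::prelude::*;
      use serde::Deserialize;

      #[derive(Deserialize)]
      struct Event {
          id: u64,
          payload: String,
      }

      fn main() {
          // One JSON document per line (newline-delimited JSON).
          let lines = vec![
              r#"{"id":1,"payload":"a"}"#,
              r#"{"id":2,"payload":"b"}"#,
          ];

          // par_iter fans the per-line serde_json work out over rayon's thread pool.
          let events: Vec<Event> = lines
              .par_iter()
              .map(|line| serde_json::from_str(line).expect("valid JSON"))
              .collect();

          println!("parsed {} events", events.len());
      }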

rust_serialization_benchmark

Posts with mentions or reviews of rust_serialization_benchmark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-13.
  • Rkyv: Rkyv zero-copy deserialization framework for rust
    2 projects | news.ycombinator.com | 13 Jan 2024
    https://github.com/djkoloski/rust_serialization_benchmark

    Apache/arrow-rs: https://github.com/apache/arrow-rs

    From https://arrow.apache.org/faq/ :

    > How does Arrow relate to Flatbuffers?

    > Flatbuffers is a low-level building block for binary data serialization. It is not adapted to the representation of large, structured, homogenous data, and does not sit at the right abstraction layer for data analysis tasks.

    > Arrow is a data layer aimed directly at the needs of data analysis, providing a comprehensive collection of data types required for analytics, built-in support for “null” values (representing missing data), and an expanding toolbox of I/O and computing facilities.

    > The Arrow file format does use Flatbuffers under the hood to serialize schemas and other metadata needed to implement the Arrow binary IPC protocol, but the Arrow data format uses its own representation for optimal access and computation
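
    As an aside on what zero-copy access with rkyv looks like, here is a hedged sketch written against the rkyv 0.7-era API (to_bytes and archived_root); the Reading type is made up, and the function names differ in other releases:

      use rkyv::{Archive, Deserialize, Serialize};

      #[derive(Archive, Serialize, Deserialize)]
      struct Reading {
          sensor: u32,
          value: f32,
      }

      fn main() {
          let reading = Reading { sensor: 7, value: 0.25 };

          // Serialize into an aligned byte buffer (256 bytes of scratch space).
          let bytes = rkyv::to_bytes::<_, 256>(&reading).expect("serialize");

          // Zero-copy: the archived value is read in place, with no deserialize step.
          // (archived_root skips validation, hence unsafe; check_archived_root is the
          // validated alternative.)
          let archived = unsafe { rkyv::archived_root::<Reading>(&bytes[..]) };
          assert_eq!(archived.sensor, 7);
      }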

  • Comfy Engine 0.3 - No Lifetimes, User Shaders, Text Rendering, 2.5D, LDTK
    1 project | /r/rust | 9 Dec 2023
    Nice that comfy gets even easier. Also, if serde's compile time is an issue, then there's nanoserde, which is usually much, much faster according to benchmarks.
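
    For context, nanoserde's derives avoid syn and quote, which is where the compile-time win comes from; a minimal round-trip sketch with a made-up Config type, using the API as I recall it:

      use nanoserde::{DeJson, SerJson};

      #[derive(DeJson, SerJson)]
      struct Config {
          width: u32,
          height: u32,
      }

      fn main() {
          let cfg = Config { width: 1280, height: 720 };

          // nanoserde generates these methods without heavy proc-macro dependencies.
          let json = cfg.serialize_json();
          let back: Config = DeJson::deserialize_json(&json).expect("valid JSON");

          assert_eq!(back.width, 1280);
      }
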
  • Müsli - An experimental binary serialization framework with more choice
    7 projects | /r/rust | 18 May 2023
    A note on performance and size: some benchmarks and statistics are included in the README, but only because people will be curious. I've based my methodology on rust_serialization_benchmark, but decided not to extend it (for now), since it seems to exclude any Rust types that are not widely supported by all of the formats being tested (like HashMaps and 128-bit numbers). The test suite is already quite nice if you want to take it for a spin.
  • bitcode 0.4 release - binary serialization format
    6 projects | /r/rust | 14 May 2023
    While we haven't benchmarked either of those ourselves, you can check out rust_serialization_benchmark, which has protobuf under the name prost.
  • Announcing bitcode format for serde
    4 projects | /r/rust | 16 Apr 2023
    Update: Benchmark PR submitted: https://github.com/djkoloski/rust_serialization_benchmark/pull/37
  • Best format for high-performance Serde?
    4 projects | /r/rust | 27 Mar 2023
    Here is a speed and size benchmark of different Rust binary serialization formats: https://github.com/djkoloski/rust_serialization_benchmark Warning: I think the creator of this benchmark is also the creator of rkyv, one of the best-positioned formats in the benchmark.
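
    For orientation, most Serde-based binary formats in that benchmark share the same two-call shape; here is a sketch against the bincode 1.x API (the Packet type is made up, and bincode 2.0 reworked this API):

      use serde::{Deserialize, Serialize};

      #[derive(Serialize, Deserialize, Debug, PartialEq)]
      struct Packet {
          id: u32,
          tags: Vec<String>,
      }

      fn main() -> Result<(), bincode::Error> {
          let packet = Packet { id: 42, tags: vec!["fast".into(), "small".into()] };

          // bincode 1.x: an untagged, little-endian encoding of the Serde data model.
          let bytes = bincode::serialize(&packet)?;
          let decoded: Packet = bincode::deserialize(&bytes)?;

          assert_eq!(packet, decoded);
          println!("{} bytes on the wire", bytes.len());
          Ok(())
      }
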
  • Grammatical, automatic furigana with SQLite and Rust
    1 project | /r/rust | 2 Feb 2023
    So I assume you're deserializing them before processing the book? If so, then for an easy speed-up you could also take a look at these benchmarks and pick a faster serialization crate. (: (Although you may or may not get a big speedup; it depends on what exactly you're deserializing and how much of it.)
  • GitHub - epage/parse-benchmarks-rs
    7 projects | /r/rust | 18 Jul 2022
    You can add the rust serialization benchmark to that list
  • The run-up to v1.0 for Postcard
    1 project | /r/rust | 10 May 2022
    Hey! Like bincode, it provides a compact binary format. The rkyv benchmark is the most comprehensive I'm aware of, but compared to bincode, postcard is generally a similar speed for serialization and deserialization (maybe a touch slower), while generally producing a slightly smaller "on the wire" size.
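
    Similarly hedged, a sketch of postcard's round trip with a made-up Telemetry type (to_allocvec assumes the crate's alloc feature is enabled):

      use serde::{Deserialize, Serialize};

      #[derive(Serialize, Deserialize, Debug, PartialEq)]
      struct Telemetry {
          id: u16,
          volts: f32,
      }

      fn main() -> Result<(), postcard::Error> {
          let sample = Telemetry { id: 3, volts: 3.3 };

          // postcard varint-encodes integers, which keeps the wire size small,
          // and the crate also works in no_std environments.
          let bytes = postcard::to_allocvec(&sample)?;
          let back: Telemetry = postcard::from_bytes(&bytes)?;

          assert_eq!(sample, back);
          println!("{} bytes on the wire", bytes.len());
          Ok(())
      }
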
  • I made a blazing fast and small new data serialization format called "DLHN" in Rust.
    4 projects | /r/rust | 9 May 2022
    You should add your crate to these benchmarks (which are, AFAIK, the most comprehensive set of benchmarks currently available for Rust serialization libraries).

What are some alternatives?

When comparing json-benchmark and rust_serialization_benchmark you can also consider the following projects:

hjson-rust for serde - Hjson for Rust

rust-serialization-benchmarks

simd-json - Rust port of simdjson

bebop - 🎷No ceremony, just code. Blazing fast, typesafe binary serialization.

hyperjson - 🐍 A hyper-fast Python module for reading/writing JSON data using Rust's serde-json.

unsafe-code-guidelines - Forum for discussion about what unsafe code can and can't do

MessagePack - MessagePack serializer implementation for Java / msgpack.org[Java]

dlhn - DLHN implementation for Rust

json - Strongly typed JSON library for Rust

bincode - A binary encoder / decoder implementation in Rust.

safety-dance - Auditing crates for unsafe code which can be safely replaced

rkyv - Zero-copy deserialization framework for Rust