Hey folks, if you've been on IRC this last week, you've probably heard me mumbling to myself or crying out in agony while trying to bind Raku to simdjson. However, the pain was not in vain!
-
You can now parse JSON a little faster than with JSON::Fast (mileage may vary), using: https://github.com/rawleyfowler/JSON-Simd
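A minimal usage sketch, assuming JSON::Simd exports a `from-json` routine the way JSON::Fast does — that export name is an assumption on my part, so check the repo's README for the actual interface:

```raku
# Hypothetical sketch: assumes JSON::Simd mirrors JSON::Fast's
# from-json export. Consult the JSON-Simd README for the real API.
use JSON::Simd;

my $data = from-json '{"name": "camelia", "langs": ["raku", "perl"]}';

# The parsed result should be ordinary Raku hashes/arrays,
# so the usual accessors apply.
say $data<name>;
say $data<langs>[0];
```

If the module follows JSON::Fast's conventions, this should be a drop-in replacement for most parsing call sites.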
-
You don't mention your benchmark data or process. Are your results dominated by overhead, not the conversion of JSON by either solution? If your benchmark processes 1MB of JSON, have you tried 1GB? 10GB? 100GB? Have you tried more rigorous benchmarking (at an extreme, using krun, though I'd defer getting into that until after you've exhausted other factors confounding a reliable benchmark)?
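For a first pass before reaching for something like krun, the kind of benchmark being asked about can be sketched in plain Raku — `large-file.json` is a placeholder for whatever corpus you test (try progressively larger inputs), and the JSON::Simd export name is assumed to match JSON::Fast's:

```raku
# Rough benchmark sketch. Assumptions: both modules are installed,
# JSON::Simd exports from-json like JSON::Fast does, and
# 'large-file.json' is your test corpus (placeholder name).
my $json = slurp 'large-file.json';

# `use` is lexically scoped in Raku, so separate blocks avoid
# the from-json export clash between the two modules.
{
    use JSON::Fast;
    my $t = now;
    from-json $json;
    say "JSON::Fast:  {now - $t}s";
}

{
    use JSON::Simd;
    my $t = now;
    from-json $json;
    say "JSON::Simd:  {now - $t}s";
}
```

Running several iterations per module and discarding the first (to let MoarVM's JIT warm up) would make the comparison less noisy; with tiny inputs, fixed per-call overhead can easily dominate either parser's actual throughput.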
-
The database example here includes a basic HashMap accessor (with Rust as the provider).