|  | encoding | datafusion-ballista |
| --- | --- | --- |
| Mentions | 8 | 12 |
| Stars | 964 | 1,288 |
| Growth | 0.7% | 4.6% |
| Activity | 3.6 | 8.2 |
| Latest commit | 5 months ago | 5 days ago |
| Language | Go | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
encoding
- Handling high-traffic HTTP requests with JSON payloads
-
Rust vs. Go in 2023
https://github.com/BurntSushi/rebar#summary-of-search-time-b...
Further, Go refusing to have macros means that many libraries use reflection instead, which often makes those parts of the Go program perform no better than Python and in some cases worse. Rust can just generate all of that at compile time with macros, and optimize them with LLVM like any other code. Some Go libraries go to enormous lengths to reduce reflection overhead, but that's hard to justify for most things, and hard to maintain even once done. The legendary https://github.com/segmentio/encoding seems to be abandoned now and progress on Go JSON in general seems to have died with https://github.com/go-json-experiment/json .
Many people claiming their projects are IO-bound are just assuming that's the case because most of the time is spent in their input reader. If they actually measured they'd see it's not even saturating a 100Mbps link, let alone 1-100Gbps, so by definition it is not IO-bound. Even if they didn't need more throughput than that, they still could have put those cycles to better use or at worst saved energy. Isn't that what people like to say about Go vs Python, that Go saves energy? Sure, but it still burns a lot more energy than it would if it had macros.
Rust can use state-of-the-art memory allocators like mimalloc, while Go is still stuck on an old fork of tcmalloc, and not the original C tcmalloc but a version transpiled to Go, so it is optimized far less than LLVM would optimize the original. (Many people benchmarking them forget to even try substitute allocators in Rust, so they're actually underestimating just how much faster Rust is.)
Finally, even Go Generics have failed to improve performance, and in many cases can make it unimaginably worse through -- I kid you not -- global lock contention hidden behind innocent type assertion syntax: https://planetscale.com/blog/generics-can-make-your-go-code-...
It's not even close. There are many reasons Go is a lot slower than Rust and many of them are likely to remain forever. Most of them have not seen meaningful progress in a decade or more. The GC has improved, which is great, but that's not even a factor on the Rust side.
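The "measure it, don't assume IO-bound" point above comes down to simple arithmetic: compare bytes processed per second against the link speed. A minimal sketch, using hypothetical numbers (30 MB of JSON parsed in 4 seconds; the real figures would come from your own profiling):

```go
package main

import "fmt"

func main() {
	// Hypothetical workload: 30 MB of JSON processed in 4 seconds.
	const bytesProcessed = 30 * 1024 * 1024
	const seconds = 4.0

	// Convert to megabits per second to compare against link speed.
	mbps := float64(bytesProcessed) * 8 / seconds / 1e6
	fmt.Printf("throughput: %.0f Mbps\n", mbps)

	// If this is well under the link speed (here, 100 Mbps), the
	// bottleneck is the CPU (e.g. the parser), not the network.
	fmt.Println("saturates a 100 Mbps link:", mbps >= 100)
}
```

Here the workload sustains about 63 Mbps, well short of even a 100 Mbps link, so by the commenter's definition it is CPU-bound, not IO-bound.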
-
Quickly checking that a string belongs to a small set
We took a similar approach in our JSON decoder. We needed to support sets (JSON object keys) that aren't necessarily known until runtime, and strings that are up to 16 bytes in length.
We got better performance with a linear scan and SIMD matching than with a hash table or a perfect hashing scheme.
See https://github.com/segmentio/asm/pull/57 (AMD64) and https://github.com/segmentio/asm/pull/65 (ARM64). Here's how it's used in the JSON decoder: https://github.com/segmentio/encoding/pull/101
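The idea behind those PRs can be sketched in scalar Go: pad each key into a fixed 16-byte slot and linearly scan, comparing whole slots at once. The actual implementation uses hand-written SIMD assembly (one instruction compares all 16 bytes); the type and function names below are hypothetical and only illustrate the layout, assuming all keys are at most 16 bytes:

```go
package main

import "fmt"

// smallSet holds a small set of short strings padded into fixed
// 16-byte slots, so membership is a linear scan of fixed-width
// equality checks instead of a hash table lookup.
type smallSet struct {
	slots [][16]byte
	lens  []int
}

func newSmallSet(keys []string) *smallSet {
	s := &smallSet{}
	for _, k := range keys {
		var slot [16]byte
		copy(slot[:], k) // keys are assumed to be <= 16 bytes
		s.slots = append(s.slots, slot)
		s.lens = append(s.lens, len(k))
	}
	return s
}

// lookup returns the index of key in the set, or -1 if absent.
func (s *smallSet) lookup(key string) int {
	var probe [16]byte
	copy(probe[:], key)
	for i, slot := range s.slots {
		// One length check plus one fixed-width compare per slot;
		// Go compares [16]byte arrays directly, no hashing needed.
		if s.lens[i] == len(key) && slot == probe {
			return i
		}
	}
	return -1
}

func main() {
	set := newSmallSet([]string{"id", "name", "created_at"})
	fmt.Println(set.lookup("name"), set.lookup("missing")) // 1 -1
}
```

For sets of a few dozen short keys, this scan touches a small, contiguous block of memory with no hash computation, which is why it can beat a hash table or perfect hashing in practice.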
-
80x improvements in caching by moving from JSON to gob
Binary formats work well for some cases, but JSON is often unavoidable since it is so widely used for APIs. However, you can make it faster in Go with https://github.com/segmentio/encoding.
-
Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
Would love to see results from incorporating https://github.com/segmentio/encoding/tree/master/json!
-
Fastest JSON parser for large (~888kB) API response?
Try this one out: https://github.com/segmentio/encoding. It's always worked well for me.
-
📖 Go Fiber by Examples: Delving into built-in functions
Converts any interface or string to JSON using the segmentio/encoding package. Also, the JSON method sets the content header to application/json.
-
In-memory caching solutions
If you're interested in super fast & easy JSON for that cache, give this a try. I've used it in prod and never had a problem.
datafusion-ballista
-
Polars
Not super on topic because this is all immature and not integrated with one another yet, but there is a scaled-out Rust data-frames-on-Arrow implementation called Ballista that could perhaps form the backend of a Polars scale-out approach: https://github.com/apache/arrow-ballista
-
Rust vs. Go in 2023
> Is Rust's compile-time GC about something other than performance somehow?
AFAIK, memory safety and language features such as RAII are also available in C++, for instance. On the reasons for slow compilation, take a look at https://www.reddit.com/r/rust/comments/xna9mb/why_are_rust_p...
Not having a GC is also about not having a runtime, as you mention (e.g. nice for creating Python extensions and for embedded systems programming), and about more deterministic runtime performance. On that, if I'm not mistaken, that was the reason Discord switched to Rust, and also, e.g.: "the choice of Rust as the main execution language avoids the overhead of GC pauses and results in deterministic processing times" https://github.com/apache/arrow-ballista/blob/main/README.md
- Ballista (Rust) vs Apache Spark. A Tale of Woe.
-
Evolution and Trends of Data Engineering 2022/23
Ballista (Arrow/Rust) is largely inspired by Apache Spark, but there are some interesting differences.
-
Data Engineering with Rust
https://github.com/jorgecarleitao/arrow2 https://github.com/apache/arrow-datafusion https://github.com/apache/arrow-ballista https://github.com/pola-rs/polars https://github.com/duckdb/duckdb
- Any job processing framework like Spark but in Rust?
-
Is Apache Arrow DataFusion and Ballista the future of big data engineering/science?
Source: https://github.com/apache/arrow-ballista
-
Pure Python Distributed SQL Engine
Can you explain how this might differ from something like https://github.com/apache/arrow-ballista
I've seen several variants of "next-gen" Spark, but nowhere have I really seen the tradeoffs/advantages/disadvantages between them laid out.
- Scala or Rust? which one will rule in future?
-
Welcome to Comprehensive Rust
Rust has amazing integration with Python through PyO3 [1], so see it as a safe alternative for high-performance calculations. The ecosystem itself is starting to come together, with exciting projects like Polars [2] (a Pandas alternative), nalgebra [3], DataFusion [4] and Ballista [5].
[1] https://github.com/PyO3/pyo3
[2] https://github.com/pola-rs/polars/
[3] https://docs.rs/nalgebra/latest/nalgebra/
[4] https://github.com/apache/arrow-datafusion
[5] https://github.com/apache/arrow-ballista
What are some alternatives?
sonic - A blazingly fast JSON serializing & deserializing library
duckdb - DuckDB is an in-process SQL OLAP Database Management System
groupcache - Clone of golang/groupcache with TTL and Item Removal support
lance - Modern columnar data format for ML and LLMs implemented in Rust. Convert from Parquet in 2 lines of code for 100x faster random access, vector index, and data versioning. Compatible with Pandas, DuckDB, Polars, and PyArrow, with more integrations coming.
parquet-go - Go library to read/write Parquet files
seafowl - Analytical database for data-driven Web applications 🪶
base64 - Faster base64 encoding for Go
connector-x - Fastest library to load data from DB to DataFrames in Rust and Python
buntdb - BuntDB is an embeddable, in-memory key/value database for Go with custom indexing and geospatial support
opteryx - 🦖 A SQL-on-everything Query Engine you can execute over multiple databases and file formats. Query your data, where it lives.
hilbert - Go package for mapping values to and from space-filling curves, such as Hilbert and Peano curves.
sqlglot - Python SQL Parser and Transpiler