arrow2 (DISCONTINUED) vs arrow-rs
| | arrow2 | arrow-rs |
|---|---|---|
| Mentions | 25 | 16 |
| Stars | 1,071 | 2,123 |
| Growth | - | 4.8% |
| Activity | 0.0 | 9.8 |
| Latest commit | about 1 month ago | 3 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arrow2
-
Polars: Company Formation Announcement
One of the interesting components of Polars that I've been watching is the use of the Apache Arrow memory format, which is a standard layout for data in memory that enables processing (querying, iterating, calculating, etc) in a language agnostic way, in particular without having to copy/convert it into the local object format first. This enables cross-language data access by mmaping or transferring a single buffer, with zero [de]serialization overhead.
For some history, there has been a bit of contention between the official arrow-rs implementation and the arrow2 implementation created by the polars team, which includes some extra features they find important. I think the current status is that everyone agrees that having two crates implementing the same standard is not ideal; the plan is to port any necessary features to the arrow-rs crate, eventually switch to it, and deprecate arrow2. But that's not easy.
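The zero-copy property described above comes from Arrow's columnar layout: each column is one contiguous buffer whose byte layout is fixed by the spec, so a consumer can reinterpret a received or mmapped buffer in place rather than parsing it record by record. A stdlib-only Rust sketch of the idea (not the arrow crate's actual API; assumes a little-endian host, which is Arrow's default byte order):

```rust
fn main() {
    // "Producer": a column of i32 values in Arrow-style layout —
    // one dense little-endian buffer, no per-value framing.
    let column: Vec<i32> = vec![1, 2, 3, 4, 5];
    let bytes: Vec<u8> = column.iter().flat_map(|v| v.to_le_bytes()).collect();

    // "Consumer": view the received buffer in place. Real Arrow buffers
    // are guaranteed to be well-aligned; here we check before casting.
    assert_eq!(bytes.as_ptr() as usize % std::mem::align_of::<i32>(), 0);
    let view: &[i32] = unsafe {
        std::slice::from_raw_parts(bytes.as_ptr() as *const i32, bytes.len() / 4)
    };

    // No per-row deserialization happened: `view` borrows `bytes` directly.
    assert_eq!(view, &[1, 2, 3, 4, 5]);
    println!("sum = {}", view.iter().sum::<i32>()); // sum = 15
}
```

A real implementation also carries a validity bitmap and Flatbuffers-encoded schema metadata alongside the values buffer, but the values themselves stay in this directly usable form.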
-
Data Engineering with Rust
https://github.com/jorgecarleitao/arrow2 https://github.com/apache/arrow-datafusion https://github.com/apache/arrow-ballista https://github.com/pola-rs/polars https://github.com/duckdb/duckdb
-
Polars[Query Engine/ DataFrame] 0.28.0 released :)
Currently datafusion and polars aren't directly operable iirc because they use different underlying arrows implementations, but there seems to be work being done on that here https://github.com/jorgecarleitao/arrow2/issues/1429
-
Rust is showing a lot of promise in the DataFrame / tabular data space
[arrow2](https://github.com/jorgecarleitao/arrow2) and [parquet2](https://github.com/jorgecarleitao/parquet2) are great foundational libraries for DataFrame libs in Rust.
-
Matano - Open source security lake built with Arrow2 + Rust
[1] https://github.com/jorgecarleitao/arrow2
-
Polars 0.23.0 released
In lockstep with arrow2's 0.13 release, we have published polars 0.23.0.
-
::lending-iterator — Lending/streaming Iterators on Stable Rust (and a pinch of HKT)
This is so freaking life-saving! - we have been using StreamingIterator and FallibleStreamingIterator in libraries (arrow2 and parquet2) and the existing landscape is quite confusing for new users!
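The difference between a lending (streaming) iterator and the standard `Iterator` is that each yielded item may borrow from the iterator itself, so only one item can be alive at a time. A minimal sketch of that core idea on stable Rust using generic associated types (this is an illustration of the concept, not the lending-iterator crate's actual API):

```rust
// A lending iterator: the item type borrows from `self`, so each call
// to `next` can hand out a reference into state the iterator owns.
trait LendingIterator {
    type Item<'a> where Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Example: overlapping *mutable* windows over a slice — impossible with
// the standard `Iterator`, since all its items must be able to coexist.
struct WindowsMut<'s, T> {
    slice: &'s mut [T],
    start: usize,
    size: usize,
}

impl<'s, T> LendingIterator for WindowsMut<'s, T> {
    type Item<'a> = &'a mut [T] where Self: 'a;

    fn next(&mut self) -> Option<&mut [T]> {
        let end = self.start + self.size;
        if end > self.slice.len() {
            return None;
        }
        let window = &mut self.slice[self.start..end];
        self.start += 1;
        Some(window)
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let mut windows = WindowsMut { slice: &mut data, start: 0, size: 2 };
    while let Some(w) = windows.next() {
        w[0] += 10; // mutate through each overlapping window
    }
    assert_eq!(data, [11, 12, 13, 4]);
    println!("{:?}", data); // [11, 12, 13, 4]
}
```

This is exactly the shape needed when iterating over, say, decompressed pages in a Parquet reader: each item can reuse one internal buffer instead of allocating per item.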
-
Polars 0.22 is released!
In lockstep with a new release of arrow2: https://github.com/jorgecarleitao/arrow2/releases/tag/v0.12.0
- Arrow2 0.12.0 released - including almost complete support for Parquet
- Is anyone around here playing with Rust (the language)?
arrow-rs
-
Rkyv: Rkyv zero-copy deserialization framework for rust
https://github.com/djkoloski/rust_serialization_benchmark
Apache/arrow-rs: https://github.com/apache/arrow-rs
From https://arrow.apache.org/faq/ :
> How does Arrow relate to Flatbuffers?
> Flatbuffers is a low-level building block for binary data serialization. It is not adapted to the representation of large, structured, homogenous data, and does not sit at the right abstraction layer for data analysis tasks.
> Arrow is a data layer aimed directly at the needs of data analysis, providing a comprehensive collection of data types required for analytics, built-in support for “null” values (representing missing data), and an expanding toolbox of I/O and computing facilities.
> The Arrow file format does use Flatbuffers under the hood to serialize schemas and other metadata needed to implement the Arrow binary IPC protocol, but the Arrow data format uses its own representation for optimal access and computation.
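The built-in null support mentioned in the FAQ is worth unpacking: Arrow stores a separate validity bitmap per array, one bit per slot (least-significant bit first, 1 = valid), kept apart from the values buffer so the values stay densely packed. A stdlib-only sketch of that layout (hypothetical type names, not the arrow crate's API):

```rust
// Arrow-style nullable Int32 column: a dense values buffer plus a
// validity bitmap. Null slots still occupy space in the values buffer;
// their contents are simply undefined.
struct Int32Column {
    values: Vec<i32>,
    validity: Vec<u8>, // ceil(len / 8) bytes, LSB-first, 1 = valid
}

impl Int32Column {
    fn from_options(items: &[Option<i32>]) -> Self {
        let mut values = Vec::with_capacity(items.len());
        let mut validity = vec![0u8; (items.len() + 7) / 8];
        for (i, item) in items.iter().enumerate() {
            match item {
                Some(v) => {
                    values.push(*v);
                    validity[i / 8] |= 1 << (i % 8); // mark slot valid
                }
                None => values.push(0), // placeholder for a null slot
            }
        }
        Int32Column { values, validity }
    }

    fn get(&self, i: usize) -> Option<i32> {
        let valid = (self.validity[i / 8] & (1 << (i % 8))) != 0;
        valid.then(|| self.values[i])
    }
}

fn main() {
    let col = Int32Column::from_options(&[Some(7), None, Some(9)]);
    assert_eq!(col.get(0), Some(7));
    assert_eq!(col.get(1), None);
    assert_eq!(col.get(2), Some(9));
    // Bitmap for [valid, null, valid] is 0b101 in the first byte.
    assert_eq!(col.validity[0], 0b0000_0101);
    println!("ok");
}
```

Keeping validity out-of-band is what lets kernels process the values buffer with straight-line vectorized code and consult the bitmap separately.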
-
Polars: Company Formation Announcement
One of the interesting components of Polars that I've been watching is the use of the Apache Arrow memory format, which is a standard layout for data in memory that enables processing (querying, iterating, calculating, etc) in a language agnostic way, in particular without having to copy/convert it into the local object format first. This enables cross-language data access by mmaping or transferring a single buffer, with zero [de]serialization overhead.
For some history, there has been a bit of contention between the official arrow-rs implementation and the arrow2 implementation created by the polars team, which includes some extra features they find important. I think the current status is that everyone agrees that having two crates implementing the same standard is not ideal; the plan is to port any necessary features to the arrow-rs crate, eventually switch to it, and deprecate arrow2. But that's not easy.
-
The state of Apache Avro in Rust
From what I've seen, most of the Rust community seems to be adopting Apache Arrow as the go-to for data processing. It has strong community support and good interoperability with many cross-language tools. It is natively a columnar format. If row-oriented is a must for your use case, consider looking into alternatives like gRPC that might better suit your needs.
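The columnar-vs-row-oriented distinction in that comment is essentially "struct of arrays" vs "array of structs". A toy Rust illustration with a hypothetical record type, showing why analytics favors the columnar form:

```rust
// Row-oriented ("array of structs"), as Avro frames data: each record's
// fields are adjacent, which suits whole-record reads and writes.
struct RowRecord {
    id: u64,
    price: f64,
}

// Column-oriented ("struct of arrays"), as Arrow stores data: all values
// of one field are contiguous, which suits scans and vectorized math.
struct ColumnBatch {
    ids: Vec<u64>,
    prices: Vec<f64>,
}

fn main() {
    let rows = vec![
        RowRecord { id: 1, price: 9.5 },
        RowRecord { id: 2, price: 1.25 },
    ];

    // Transposing rows into columns (what an Arrow ingest step does).
    let batch = ColumnBatch {
        ids: rows.iter().map(|r| r.id).collect(),
        prices: rows.iter().map(|r| r.price).collect(),
    };

    // An analytic query touches only the column it needs — the `ids`
    // buffer never enters the cache for this aggregation.
    let total: f64 = batch.prices.iter().sum();
    assert_eq!(total, 10.75);
    assert_eq!(batch.ids, vec![1, 2]);
    println!("total = {total}"); // total = 10.75
}
```

The flip side is the comment's point: appending or reading one whole record at a time touches every column buffer, which is why row-oriented formats remain the better fit for record-at-a-time workloads.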
-
Apache Arrow Feature Parity Timeline?
That matrix doesn't seem up to date. For example, looking at the Rust crate, it does seem to support things like map, float16, and IPC. The changelog shows an impressive development pace.
-
Apache Arrow Flight SQL: Accelerating Database Access
Oh, and for anyone interested in pitching in on the Rust implementation, there's an issue logged here along with some discussion: https://github.com/apache/arrow-rs/issues/1323
-
Arrow2 0.9 has been released
Deeply nested parquet support would be nice. Even the official implementation lacks this. https://github.com/apache/arrow-rs/issues/993
I'm still not sure how this differs from https://github.com/apache/arrow-rs. What does transmute even mean?
-
A SQL Database
Cool! You may want to take a look at the Apache Arrow, rust project and datafusion: https://github.com/apache/arrow-rs and https://github.com/apache/arrow-datafusion
-
Nushell 0.34 released - the first release with dataframe support
Congrats team and great work @elferherrera! Note that this is backed by Polars and Arrow, and is as fast as it gets. :)
What are some alternatives?
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
arrow-datafusion - Apache Arrow DataFusion SQL Query Engine
db-benchmark - reproducible benchmark of database-like ops
pyodide - Pyodide is a Python distribution for the browser and Node.js based on WebAssembly
explorer - Series (one-dimensional) and dataframes (two-dimensional) for fast and elegant data exploration in Elixir
datafuse - An elastic and reliable cloud warehouse offering blazing-fast queries, combining the elasticity, simplicity, and low cost of the cloud; built to make the Data Cloud easy [moved to: https://github.com/datafuselabs/databend]
parquet2 - Fastest and safest Rust implementation of parquet. `unsafe` free. Integration-tested against pyarrow
explorer - An open source block explorer
byo-sql - An in-memory SQL database in Rust.