turbodbc vs arrow2

| | turbodbc | arrow2 |
|---|---|---|
| Mentions | 2 | 25 |
| Stars | 603 | 1,071 |
| Growth | 0.0% | - |
| Activity | 8.0 | 0.0 |
| Latest commit | 3 days ago | 3 months ago |
| Language | C++ | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
turbodbc
-
Arrowdantic 0.1.0 released
It supports reading from and writing to ODBC-compliant databases at likely similar performance to turbodbc, and it does not require conda to install.
-
arrow-odbc: Fetch arrow arrays from an ODBC data source in a pip installable environment
turbodbc is great, but a pain to build, at least without conda. arrow-odbc-py uses cffi (rather than PyO3) to talk to a Rust backend and then uses the Arrow C Data Interface to provide the user with pyarrow-compatible Arrow arrays. The use of a dedicated C interface in both places avoids linking directly against the Python interpreter's C API as well as against the specific C++ Arrow libraries your pyarrow version depends on, which avoids some of the pain of dependency hell.
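The Arrow C Data Interface mentioned above is essentially a pair of small C structs that producer and consumer both agree on, which is what lets a Rust backend hand buffers to pyarrow without either side linking against the other. As a sketch, here is a Rust rendering of the `ArrowArray` struct as described in the Arrow C Data Interface specification (field names follow the spec; this is illustrative, not arrow-odbc's actual code):

```rust
use std::os::raw::c_void;

/// The `ArrowArray` struct from the Arrow C Data Interface specification.
/// A producer (e.g. a Rust backend behind cffi) fills this in; a consumer
/// (e.g. pyarrow) reads the buffers in place -- no copy, and no link-time
/// dependency between the two libraries.
#[repr(C)]
pub struct ArrowArray {
    pub length: i64,
    pub null_count: i64,
    pub offset: i64,
    pub n_buffers: i64,
    pub n_children: i64,
    pub buffers: *mut *const c_void,
    pub children: *mut *mut ArrowArray,
    pub dictionary: *mut ArrowArray,
    /// Callback the consumer invokes when it is done with the data,
    /// so the producer can free its buffers.
    pub release: Option<unsafe extern "C" fn(*mut ArrowArray)>,
    pub private_data: *mut c_void,
}

pub fn arrow_array_size() -> usize {
    std::mem::size_of::<ArrowArray>()
}

fn main() {
    // Five i64 fields plus five pointer-sized fields; the fixed #[repr(C)]
    // layout is the whole contract between producer and consumer.
    println!("ArrowArray is {} bytes", arrow_array_size());
}
```

Because the struct layout (not a library API) is the contract, any two implementations that agree on it can exchange arrays across the FFI boundary.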
arrow2
-
Polars: Company Formation Announcement
One of the interesting components of Polars that I've been watching is the use of the Apache Arrow memory format, which is a standard layout for data in memory that enables processing (querying, iterating, calculating, etc.) in a language-agnostic way, in particular without having to copy/convert it into the local object format first. This enables cross-language data access by mmapping or transferring a single buffer, with zero [de]serialization overhead.
For some history, there has been a bit of contention between the official arrow-rs implementation and the arrow2 implementation used by the polars team, which includes some extra features that they find important. I think the current status is that everyone agrees that having two crates implement the same standard is not ideal, and they are working to port any necessary features to the arrow-rs crate, with a plan to eventually switch to it and deprecate arrow2. But that's not easy.
https://github.com/apache/arrow-rs/issues/1176
https://github.com/jorgecarleitao/arrow2/pull/1476
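The "standard layout" both crates implement boils down to contiguous value buffers plus a validity bitmap (one bit per slot) for nulls. A minimal std-only sketch of that layout for a nullable Int32 column — illustrative of the format, not arrow2's or arrow-rs's actual API:

```rust
/// A sketch of Arrow's layout for a nullable Int32 column: a contiguous
/// values buffer plus a validity bitmap with one bit per slot.
/// Names here are illustrative, not arrow2's real types.
struct Int32Column {
    values: Vec<i32>,
    validity: Vec<u8>, // bit i set => values[i] is non-null
}

impl Int32Column {
    fn is_valid(&self, i: usize) -> bool {
        self.validity[i / 8] & (1 << (i % 8)) != 0
    }

    /// Sum of the non-null values; nulls are skipped via the bitmap.
    fn sum(&self) -> i64 {
        (0..self.values.len())
            .filter(|&i| self.is_valid(i))
            .map(|i| self.values[i] as i64)
            .sum()
    }
}

fn main() {
    // Logical column [1, NULL, 3, 4]: bits 0, 2, 3 set => 0b0000_1101.
    let col = Int32Column {
        values: vec![1, 0, 3, 4],
        validity: vec![0b0000_1101],
    };
    println!("sum = {}", col.sum()); // 1 + 3 + 4
}
```

Because the layout is fixed and dense, a consumer in any language can interpret the same two buffers without deserializing, which is what makes the cross-crate (and cross-language) interop discussions above tractable at all.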
-
Data Engineering with Rust
https://github.com/jorgecarleitao/arrow2
https://github.com/apache/arrow-datafusion
https://github.com/apache/arrow-ballista
https://github.com/pola-rs/polars
https://github.com/duckdb/duckdb
-
Polars[Query Engine/ DataFrame] 0.28.0 released :)
Currently datafusion and polars aren't directly interoperable, IIRC, because they use different underlying Arrow implementations, but there seems to be work being done on that here: https://github.com/jorgecarleitao/arrow2/issues/1429
- Arrow2 0.15 has been released. Happy festivities everyone =)
-
Rust is showing a lot of promise in the DataFrame / tabular data space
[arrow2](https://github.com/jorgecarleitao/arrow2) and [parquet2](https://github.com/jorgecarleitao/parquet2) are great foundational libraries for DataFrame libs in Rust.
-
Matano - Open source security lake built with Arrow2 + Rust
[1] https://github.com/jorgecarleitao/arrow2
-
Polars 0.23.0 released
In lockstep with arrow2's 0.13 release, we have published polars 0.23.0.
- Arrow2 v0.13.0, now with support to read Apache ORC and COW semantics!
-
::lending-iterator — Lending/streaming Iterators on Stable Rust (and a pinch of HKT)
This is so freaking life-saving! - we have been using StreamingIterator and FallibleStreamingIterator in libraries (arrow2 and parquet2) and the existing landscape is quite confusing for new users!
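The core idea behind such crates — an iterator whose items borrow from the iterator itself, so only one item is alive at a time — is expressible on stable Rust via generic associated types (stable since 1.65). A sketch under that assumption (the trait and type names here are illustrative, not the crate's actual definitions):

```rust
/// A lending iterator: `next` returns an item that borrows from the
/// iterator itself. This is a sketch of the concept, not the
/// ::lending-iterator crate's real trait.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

/// Yields overlapping windows over a slice. With std's `Iterator` the
/// items could not borrow from the iterator; here they may.
struct Windows<'s> {
    data: &'s [i32],
    size: usize,
    pos: usize,
}

impl<'s> LendingIterator for Windows<'s> {
    type Item<'a>
        = &'a [i32]
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        if self.pos + self.size > self.data.len() {
            return None;
        }
        let w = &self.data[self.pos..self.pos + self.size];
        self.pos += 1;
        Some(w)
    }
}

fn main() {
    let data = [1, 2, 3, 4];
    let mut it = Windows { data: &data, size: 2, pos: 0 };
    while let Some(w) = it.next() {
        println!("{:?}", w); // [1, 2], then [2, 3], then [3, 4]
    }
}
```

Before GATs stabilized, crates like StreamingIterator and FallibleStreamingIterator worked around this limitation with less ergonomic signatures, which is the confusion the comment above is describing.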
-
Mssql :(
arrow2 has support for mssql via ODBC (for which Microsoft provides first-class support). Here are the integration tests we have (both read and write) against mssql specifically.
What are some alternatives?
arrow-odbc-py - Read Apache Arrow batches from ODBC data sources in Python
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
soci - Official repository of the SOCI - The C++ Database Access Library
datafusion - Apache DataFusion SQL Query Engine
vinum - Vinum is a SQL processor for Python, designed for data analysis workflows and in-memory analytics.
db-benchmark - reproducible benchmark of database-like ops
pyexasol - Exasol Python driver with low overhead, fast HTTP transport and compression
arrow-rs - Official Rust implementation of Apache Arrow
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
pyodide - Pyodide is a Python distribution for the browser and Node.js based on WebAssembly
odbc - Connect to ODBC databases (using the DBI interface)
explorer - Series (one-dimensional) and dataframes (two-dimensional) for fast and elegant data exploration in Elixir