mack vs delta-rs
| | mack | delta-rs |
|---|---|---|
| Mentions | 5 | 28 |
| Stars | 269 | 1,820 |
| Growth | - | 6.1% |
| Activity | 5.9 | 9.7 |
| Latest commit | 3 months ago | 5 days ago |
| Language | Python | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mack
-
Implementing and using SCD Type 2
Isn't there also a library from Databricks for this? I have never used it, though: https://github.com/MrPowers/mack
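(For reference, mack ships a `type_2_scd_upsert` helper for this pattern. A minimal sketch, assuming an active SparkSession `spark`, an `updates_df` DataFrame, and a target table that already has the SCD2 tracking columns mack expects (`is_current`, `effective_time`, `end_time`):)

```python
import mack
from delta import DeltaTable

# Target Delta table; must carry the SCD2 tracking columns listed above.
delta_table = DeltaTable.forPath(spark, "tmp/customers")

# "pkey" is the business key; a new version of a row is written whenever
# any of the listed attribute columns change.
mack.type_2_scd_upsert(delta_table, updates_df, "pkey", ["attr1", "attr2"])
```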
-
Spark/Databricks seems amazing?
I was a Databricks user for 5 years and spent almost all my time inside the IntelliJ IDE developing code. I wrote almost all code in a text editor, unit tested all code (I actually authored the popular Scala Spark / PySpark testing libraries: https://github.com/MrPowers/), and had everything set up with CI/CD. Lots of OSS PySpark / Scala Spark work too. I only used Databricks notebooks for data exploration and for lightweight notebooks that would invoke functions (that were defined in Python wheel / JAR files). I am on the Delta Lake team at Databricks now and still do all my work in text editors (see this project: https://github.com/MrPowers/mack) and create lots of examples in Jupyter notebooks. So I definitely think it's possible to limit notebook exposure.
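(The PySpark testing library referenced above is chispa, also listed under the alternatives below. A minimal sketch of the DataFrame-equality tests it enables, assuming a pytest fixture that provides a SparkSession:)

```python
from chispa import assert_df_equality

def test_transformation(spark):  # `spark` is a pytest fixture providing a SparkSession
    actual_df = spark.createDataFrame([("jose", 1), ("li", 2)], ["name", "age"])
    expected_df = spark.createDataFrame([("jose", 1), ("li", 2)], ["name", "age"])
    # Raises with a readable row-by-row diff when the DataFrames differ
    assert_df_equality(actual_df, expected_df)
```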
-
PySpark OSS Contribution Opportunity
Great, would love your help. You can also check out the mack project if you'd like to work on a Delta Lake + PySpark project: https://github.com/MrPowers/mack/issues
-
Spark open source community is awesome
A couple of devs just added a `find_composite_key_candidates` function so users can easily identify columns that could be used as a unique identifier in their Delta table.
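(A rough sketch of how that helper is called, assuming the function name above from the mack docs and an active SparkSession:)

```python
import mack
from delta import DeltaTable

delta_table = DeltaTable.forPath(spark, "tmp/orders")

# Returns a minimal set of columns whose combined values uniquely
# identify every row, i.e. a candidate composite key for merges.
candidates = mack.find_composite_key_candidates(delta_table)
print(candidates)  # e.g. ["order_id", "line_number"]
```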
-
How to append data to Delta tables without adding any duplicates
Fair points. Here's the code repo: https://github.com/MrPowers/mack
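(The repo includes an `append_without_duplicates` helper for exactly this; a minimal sketch, assuming the signature shown in the mack README and an active SparkSession:)

```python
import mack
from delta import DeltaTable

delta_table = DeltaTable.forPath(spark, "tmp/events")

# Appends rows from append_df, skipping any whose key column values
# already exist in the target table (a merge under the hood).
mack.append_without_duplicates(delta_table, append_df, ["event_id"])
```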
delta-rs
-
Delta-rs – a Rust-based implementation of Delta Lake
-
Delta Lake vs. Parquet: A Comparison
I work at Databricks, but am pretty much just an OSS nerd, mainly focusing on Delta Rust recently: https://github.com/delta-io/delta-rs
I did some keyword research and wrote this post because lots of folks are searching for Delta Lake vs. Parquet. I'm just trying to share a fair summary of the tradeoffs with folks who are doing this search. It's a popular post, and that's why I figured I would share it here.
-
Working with Rust
Seeing a lot of great libraries coming out with Python bindings in the data world, e.g. delta-rs and Polars. I see Rust growing in this space as a C++ alternative.
-
Ideas/Suggestions around setting up a data pipeline from scratch
If I’m not misunderstanding, you could both decode the gRPC protobuf AND write to Delta Lake in Rust: Tonic for the gRPC side, delta-rs for the Delta Lake side.
-
Delta-rs with upserts
https://github.com/delta-io/delta-rs/issues/850 … looks like it’s on the roadmap!
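(For anyone landing here later: a MERGE builder has since shipped in the deltalake Python bindings. A hedged sketch of the upsert pattern, with names taken from the deltalake docs and assuming a recent release:)

```python
import pyarrow as pa
from deltalake import DeltaTable

dt = DeltaTable("tmp/customers")
source = pa.table({"id": [1, 2], "name": ["new-a", "new-b"]})

# Upsert: update rows whose id matches, insert the rest.
(
    dt.merge(
        source=source,
        predicate="target.id = source.id",
        source_alias="source",
        target_alias="target",
    )
    .when_matched_update_all()
    .when_not_matched_insert_all()
    .execute()
)
```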
-
Read and filter delta files on Azure from a .net application
Microsoft talked a lot at the Build conference about OneLake and about the Delta file format becoming the standard. Is it only me who finds it strange that their marketing team talks so much about the Delta format when they don't even provide a library to work with it from .NET? It would be easy for them to maintain bindings to https://github.com/delta-io/delta-rs, and also to provide a reader that supports V-Order: https://learn.microsoft.com/en-us/fabric/data-engineering/delta-optimization-and-v-order?tabs=sparksql
-
Polars query engine 0.29.0 released
I know someone will be adding this on the Python side in the coming weeks. On the Rust side you can use delta-rs with Polars, though you would be compiling both arrow2 and arrow-rs, so that's quite heavy.
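(On the Python side the glue is already short: delta-rs reads the table into Arrow, and Polars wraps the Arrow data. A sketch assuming the deltalake and polars packages:)

```python
import polars as pl
from deltalake import DeltaTable

# delta-rs materializes the Delta table as a PyArrow table;
# Polars can wrap Arrow data without copying.
arrow_table = DeltaTable("tmp/my_table").to_pyarrow_table()
df = pl.from_arrow(arrow_table)
print(df.head())
```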
-
Delta Lake without Databricks?
You don’t need DBX to use Delta Lake. You can use S3 as the backend and just use the Python Delta Lake library. It works great! https://github.com/delta-io/delta-rs
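(A minimal sketch of that setup with the deltalake package; the storage_options keys follow the delta-rs docs, and credentials here are assumed to come from the environment:)

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake

# Older delta-rs releases need a locking strategy for S3 writes; the
# simplest opt-out for single-writer setups is shown below.
storage_options = {
    "AWS_REGION": "us-east-1",
    "AWS_S3_ALLOW_UNSAFE_RENAME": "true",
}

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
write_deltalake("s3://my-bucket/my-table", df, mode="append",
                storage_options=storage_options)

# Read it back, no Spark or Databricks involved.
dt = DeltaTable("s3://my-bucket/my-table", storage_options=storage_options)
print(dt.to_pandas())
```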
-
Seeking Recommendations for a Master Data Management Tool
Maybe if I get some free time soon I can formalize it into a working example. I've been wanting an excuse to try a similar concept in delta-rs + Polars/DuckDB vs. Databricks/Spark vs. Iceberg/Polars.
-
Opportunity to contribute to a popular Rust data project (delta-rs)
delta-rs is a native Rust library for Delta Lake. Delta Lake is a better way to store data than plain Parquet files, and delta-rs is a fundamentally important library for the Rust data ecosystem. It's tightly integrated with Polars and DataFusion, and there is a lot of interesting Rust work to be done.
What are some alternatives?
chispa - PySpark test helper methods with beautiful error messages
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, and APIs for Scala, Java, Rust, Ruby, and Python
os-lib - OS-Lib is a simple, flexible, high-performance Scala interface to common OS filesystem and subprocess APIs
roapi - Create full-fledged APIs for slowly moving datasets without writing a single line of code.
jodie - Delta lake and filesystem helper methods
materialize - The data warehouse for operational workloads.
ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.
kafka-delta-ingest - A highly efficient daemon for streaming data from Kafka into Delta Lake
delta-oss
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
databricks-cli - The missing command line client for Databricks SQL
dipa - dipa makes it easy to efficiently delta encode large Rust data structures.