| | kafka-delta-ingest | arrow2 |
|---|---|---|
| Mentions | 6 | 25 |
| Stars | 325 | 1,071 |
| Growth | 3.4% | - |
| Activity | 7.4 | 0.0 |
| Last commit | 18 days ago | 3 months ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kafka-delta-ingest
-
Using rust for DE activities?
Rust can offer incredible cost savings when you can use it in place of Spark to interact with your Delta Lake. One such project is kafka-delta-ingest: the developers were able to reduce the cost of running the pipeline by over 90%. Most of this tooling is still very experimental and not ready for production, but you will definitely be seeing more projects like this, simply because of how much money can be saved.
-
Which lakehouse table format do you expect your organization will be using by the end of 2023?
This independence from a catalog allows for path based reads and writes. This is handy when writing from Kafka directly to Delta Lake for the first layer of ingestion. You don’t need a catalog (or even Spark). https://github.com/delta-io/kafka-delta-ingest/tree/main/src
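The reason no catalog is needed is that a Delta table's metadata is just files: JSON commit entries under `_delta_log/` sitting next to the Parquet data, so any process that can read the path can discover the table's contents. A toy sketch of that idea (deliberately naive string matching, not a real JSON parser; real readers distinguish `add` from `remove` actions and handle checkpoints):

```rust
/// Toy extraction of data-file paths from one Delta commit entry.
/// Real readers parse the JSON actions properly (add vs. remove,
/// checkpoints, schema metadata, etc.); here we just scan for "path" keys.
fn data_files(commit: &str) -> Vec<String> {
    let mut files = Vec::new();
    let key = "\"path\":\"";
    let mut rest = commit;
    while let Some(i) = rest.find(key) {
        rest = &rest[i + key.len()..];
        if let Some(end) = rest.find('"') {
            files.push(rest[..end].to_string());
            rest = &rest[end..];
        }
    }
    files
}

fn main() {
    // One commit entry, as found in _delta_log/00000000000000000001.json (abridged).
    let commit = r#"{"add":{"path":"part-00000-abc.snappy.parquet","size":1024,"dataChange":true}}"#;
    let files = data_files(commit);
    assert_eq!(files, vec!["part-00000-abc.snappy.parquet"]);
    println!("data files: {:?}", files);
}
```

Because the log is the single source of truth and lives alongside the data, a writer like kafka-delta-ingest only needs a path and object-store credentials, not a metastore.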
-
Streaming Data and Postgres
As far as I know, no. You could certainly put events on a streaming log like Kafka or Redpanda, store them to Delta with https://github.com/delta-io/kafka-delta-ingest, and process them with all the GIS goodness of Spark. However, this is fairly complicated and very different from a simple PostGIS drop-in replacement. There are specialized (meaning faster and more efficient) systems out there for specialized tasks such as real-time geofencing.
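For context on the geofencing remark: at its core, a geofence check is a point-in-polygon test. A minimal ray-casting sketch (stdlib only; real systems add spatial indexes, projections, and careful edge-case handling):

```rust
/// Ray-casting point-in-polygon test: cast a ray toward +x and count
/// edge crossings; an odd count means the point is inside the fence.
fn inside(fence: &[(f64, f64)], p: (f64, f64)) -> bool {
    let mut is_in = false;
    let n = fence.len();
    let mut j = n - 1;
    for i in 0..n {
        let (xi, yi) = fence[i];
        let (xj, yj) = fence[j];
        // Does edge (j -> i) straddle the ray's y, and cross to the right of p?
        if (yi > p.1) != (yj > p.1)
            && p.0 < (xj - xi) * (p.1 - yi) / (yj - yi) + xi
        {
            is_in = !is_in;
        }
        j = i;
    }
    is_in
}

fn main() {
    // A unit-square geofence.
    let fence = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)];
    assert!(inside(&fence, (0.5, 0.5)));
    assert!(!inside(&fence, (1.5, 0.5)));
}
```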
-
Rust is showing a lot of promise in the DataFrame / tabular data space
kafka-delta-ingest is a good project to get streaming data into a Delta Lake. Here's a great talk on the topic.
-
process millions of events per sec
What about https://github.com/delta-io/kafka-delta-ingest?
- Exactly once delivery from Kafka to Delta Lake with Rust
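The title hints at how exactly-once works: per the Delta protocol's `txn` action, a writer can record the Kafka offsets it has ingested inside the same atomic commit as the data, so a restarted writer sees the last offset it durably wrote and drops replayed batches. A stand-alone sketch of that idempotence check (the `Table` struct and names are illustrative, not kafka-delta-ingest's actual code; a `HashMap` stands in for the table's transaction log):

```rust
use std::collections::HashMap;

/// Sketch of exactly-once via transactional offset tracking.
struct Table {
    rows: Vec<String>,
    // writer app id -> highest Kafka offset committed by that writer
    txn: HashMap<String, i64>,
}

impl Table {
    fn new() -> Self {
        Table { rows: Vec::new(), txn: HashMap::new() }
    }

    /// Atomically append a batch and record its last offset; a batch
    /// at or below the committed offset is a replay and is dropped.
    fn commit(&mut self, app: &str, last_offset: i64, batch: Vec<String>) -> bool {
        if let Some(&done) = self.txn.get(app) {
            if last_offset <= done {
                return false; // duplicate delivery: skip, no double write
            }
        }
        self.rows.extend(batch);
        self.txn.insert(app.to_string(), last_offset);
        true
    }
}

fn main() {
    let mut t = Table::new();
    assert!(t.commit("ingest-1", 9, vec!["a".into(), "b".into()]));
    // Writer crashes after committing and replays the same batch:
    assert!(!t.commit("ingest-1", 9, vec!["a".into(), "b".into()]));
    assert_eq!(t.rows.len(), 2);
}
```

Because the offset update and the data append succeed or fail together, at-least-once delivery from Kafka becomes exactly-once in the table.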
arrow2
-
Polars: Company Formation Announcement
One of the interesting components of Polars that I've been watching is the use of the Apache Arrow memory format, which is a standard layout for data in memory that enables processing (querying, iterating, calculating, etc) in a language agnostic way, in particular without having to copy/convert it into the local object format first. This enables cross-language data access by mmaping or transferring a single buffer, with zero [de]serialization overhead.
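To make that layout concrete: an Arrow column is essentially a contiguous values buffer plus a validity bitmap, which is why another process or language can interpret the same bytes without copying or converting them. A simplified stand-in (not arrow2's actual types):

```rust
/// Arrow-style column sketch: values in one contiguous buffer plus a
/// validity bitmap (one bit per slot). Any reader that knows the layout
/// can interpret these two buffers without deserializing anything.
struct Int32Column {
    values: Vec<i32>,  // contiguous values buffer
    validity: Vec<u8>, // LSB-first bitmap: 1 = valid, 0 = null
}

impl Int32Column {
    fn get(&self, i: usize) -> Option<i32> {
        let valid = (self.validity[i / 8] >> (i % 8)) & 1 == 1;
        if valid { Some(self.values[i]) } else { None }
    }
}

fn main() {
    // Logical column [7, null, 9] -> validity bits 0b101.
    let col = Int32Column { values: vec![7, 0, 9], validity: vec![0b0000_0101] };
    assert_eq!(col.get(0), Some(7));
    assert_eq!(col.get(1), None);
    assert_eq!(col.get(2), Some(9));
}
```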
For some history, there has been a bit of contention between the official arrow-rs implementation and the arrow2 implementation created by the polars team, which includes some extra features that they find important. I think the current status is that everyone agrees having two crates that implement the same standard is not ideal, and they are working to port any necessary features to the arrow-rs crate, with a plan to eventually switch to it and deprecate arrow2. But that's not easy.
https://github.com/apache/arrow-rs/issues/1176
https://github.com/jorgecarleitao/arrow2/pull/1476
-
Data Engineering with Rust
https://github.com/jorgecarleitao/arrow2 https://github.com/apache/arrow-datafusion https://github.com/apache/arrow-ballista https://github.com/pola-rs/polars https://github.com/duckdb/duckdb
-
Polars[Query Engine/ DataFrame] 0.28.0 released :)
Currently datafusion and polars aren't directly interoperable, IIRC, because they use different underlying Arrow implementations, but there seems to be work being done on that here: https://github.com/jorgecarleitao/arrow2/issues/1429
- Arrow2 0.15 has been released. Happy festivities everyone =)
-
Rust is showing a lot of promise in the DataFrame / tabular data space
[arrow2](https://github.com/jorgecarleitao/arrow2) and [parquet2](https://github.com/jorgecarleitao/parquet2) are great foundational libraries for DataFrame libs in Rust.
-
Matano - Open source security lake built with Arrow2 + Rust
[1] https://github.com/jorgecarleitao/arrow2
-
Polars 0.23.0 released
In lockstep with arrow2's 0.13 release, we have published polars 0.23.0.
- Arrow2 v0.13.0, now with support to read Apache ORC and COW semantics!
-
::lending-iterator — Lending/streaming Iterators on Stable Rust (and a pinch of HKT)
This is so freaking life-saving! - we have been using StreamingIterator and FallibleStreamingIterator in libraries (arrow2 and parquet2) and the existing landscape is quite confusing for new users!
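For readers wondering what a lending (streaming) iterator buys you: its items may borrow from the iterator itself, which std's `Iterator` cannot express. A minimal sketch using generic associated types, stable since Rust 1.65 (the trait shape mirrors what these crates provide, but is not their exact API):

```rust
/// A lending iterator: each item borrows from the iterator itself,
/// so the next() call invalidates the previous item.
trait LendingIterator {
    type Item<'a> where Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

/// Overlapping *mutable* windows over a slice: impossible with the
/// std Iterator trait, natural with a lending one.
struct WindowsMut<'s, T> {
    slice: &'s mut [T],
    start: usize,
    size: usize,
}

impl<'s, T> LendingIterator for WindowsMut<'s, T> {
    type Item<'a> = &'a mut [T] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let start = self.start;
        let end = start + self.size;
        if end > self.slice.len() {
            return None;
        }
        self.start += 1;
        Some(&mut self.slice[start..end])
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let mut it = WindowsMut { slice: &mut data, start: 0, size: 2 };
    while let Some(w) = it.next() {
        w[0] += 10; // mutate through the lent window
    }
    assert_eq!(data, [11, 12, 13, 4]);
}
```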
-
Mssql :(
arrow2 has support for MSSQL via ODBC (for which Microsoft has first-class support). Here are the integration tests we have (both read and write) against MSSQL specifically.
What are some alternatives?
delta-rs - A native Rust library for Delta Lake, with bindings into Python
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
dipa - dipa makes it easy to efficiently delta encode large Rust data structures.
datafusion - Apache DataFusion SQL Query Engine
kafka-rust - Rust client for Apache Kafka
db-benchmark - reproducible benchmark of database-like ops
rust-rdkafka - A fully asynchronous, futures-based Kafka client library for Rust based on librdkafka
arrow-rs - Official Rust implementation of Apache Arrow
flowgger - A fast data collector in Rust
pyodide - Pyodide is a Python distribution for the browser and Node.js based on WebAssembly
delta - A syntax-highlighting pager for git, diff, and grep output
explorer - Series (one-dimensional) and dataframes (two-dimensional) for fast and elegant data exploration in Elixir