| | parquet2 | kafka-delta-ingest |
|---|---|---|
| Mentions | 6 | 6 |
| Stars | 347 | 325 |
| Growth | - | 4.0% |
| Activity | 3.2 | 7.4 |
| Last Commit | 8 months ago | 20 days ago |
| Language | Rust | Rust |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
parquet2
- Rust is showing a lot of promise in the DataFrame / tabular data space
  [arrow2](https://github.com/jorgecarleitao/arrow2) and [parquet2](https://github.com/jorgecarleitao/parquet2) are great foundational libraries for DataFrame libs in Rust.
- lending-iterator — Lending/streaming Iterators on Stable Rust (and a pinch of HKT)
  This is so freaking life-saving! We have been using StreamingIterator and FallibleStreamingIterator in libraries (arrow2 and parquet2), and the existing landscape is quite confusing for new users!
- Is anyone around here playing with Rust (the language)?
- Parquet2 0.9 released (and a request for feedback)
  Thanks a lot for your feedback. Based on it, I am proposing the following change: https://github.com/jorgecarleitao/parquet2/pull/78
- parquet2 0.3.0, with native support for async reads
  Released on GitHub.
kafka-delta-ingest
- Using Rust for DE activities?
  Rust can offer incredible cost savings when you can use it in place of Spark to interact with your Delta Lake. One such project is kafka-delta-ingest: the developers were able to reduce the cost of running the pipeline by over 90%. Most of this is still very experimental and not ready for production, but you will definitely see more projects like this, given how much money can be saved.
- Which lakehouse table format do you expect your organization will be using by the end of 2023?
  This independence from a catalog allows for path-based reads and writes, which is handy when writing from Kafka directly to Delta Lake for the first layer of ingestion. You don't need a catalog (or even Spark). https://github.com/delta-io/kafka-delta-ingest/tree/main/src
- Streaming Data and Postgres
  As far as I know, no. You certainly could put events on a streaming ledger like Kafka or Redpanda, store them to Delta with https://github.com/delta-io/kafka-delta-ingest, and process them with all the GIS goodness of Spark. However, this is fairly complicated and very different from a simple PostGIS drop-in replacement. There are specialized (that is, faster and more efficient) systems out there for specialized tasks such as real-time geofencing.
- Rust is showing a lot of promise in the DataFrame / tabular data space
  kafka-delta-ingest is a good project for getting streaming data into a Delta Lake. Here's a great talk on the topic.
- Process millions of events per sec
  What about https://github.com/delta-io/kafka-delta-ingest?
- Exactly-once delivery from Kafka to Delta Lake with Rust
What are some alternatives?
parquet-format-rs - Apache Parquet format for Rust, hosting the Thrift definition file and the generated .rs file
delta-rs - A native Rust library for Delta Lake, with bindings into Python
rust-brotli - Brotli compressor and decompressor written in rust that optionally avoids the stdlib
dipa - dipa makes it easy to efficiently delta encode large Rust data structures.
roapi - Create full-fledged APIs for slowly moving datasets without writing a single line of code.
kafka-rust - Rust client for Apache Kafka
arrow2 - Transmute-free Rust library to work with the Arrow format
rust-rdkafka - A fully asynchronous, futures-based Kafka client library for Rust based on librdkafka
inkwell - It's a New Kind of Wrapper for Exposing LLVM (Safely)
flowgger - A fast data collector in Rust
pqrs - Command line tool for inspecting Parquet files