roapi vs delta-rs
| | roapi | delta-rs |
|---|---|---|
| Mentions | 24 | 28 |
| Stars | 3,070 | 1,820 |
| Stars growth (monthly) | 1.7% | 6.1% |
| Activity | 6.9 | 9.7 |
| Last commit | about 1 month ago | about 23 hours ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
roapi
- Full-fledged APIs for slowly moving datasets without writing code
- Tuql: Automatically create a GraphQL server from a SQLite database
If your use case is read-only I suggest taking a look at roapi[1]. It supports multiple read frontends (GraphQL, SQL, REST) and many backends like SQLite, JSON, google sheets, MySQL, etc.
[1] https://github.com/roapi/roapi
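As a concrete illustration of the "no code" setup the comment describes, here is a minimal config sketch modeled on the YAML layout in roapi's README; the table names and file paths are invented, so check them against the project docs:

```yaml
# Hypothetical roapi config: register two local datasets and serve them
# through roapi's query frontends without writing any application code.
addr:
  http: 0.0.0.0:8080
tables:
  - name: "blogs"
    uri: "test_data/blogs.parquet"
  - name: "spacex_launches"
    uri: "test_data/spacex_launches.json"
```

Once the server is up, the same registered table should be queryable through each frontend (endpoint paths here are from memory of the README, e.g. REST at `/api/tables/blogs`, SQL at `/api/sql`, GraphQL at `/api/graphql`), so verify them against the documentation.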
- Who is using AXUM in production?
- Ask HN: Best way to provide access to large data sets
For smaller datasets, anywhere up to a few MB, an API is reasonable, but in theory historic data could run to several GB. I've not seen Datasette go that high (IIRC it has a 1,000-row return limit by default).
That's what got me intrigued by Atlassian's offering, as data lakes tend to be something internal to a company, not something I've ever seen offered as an interaction point for users.
I've also tested out roapi [1], which is nice if the data is already in some structured format (Parquet/JSON).
[1] https://github.com/roapi/roapi
- "thread 'main' panicked at 'no CA certificates found'" when running the application in a Docker container
https://github.com/roapi/roapi/issues/103
- Roapi 0.9 release adds support for all cloud storage providers
- SQLite-based databases on the Postgres protocol? Yes we can
Very cool and well executed project. Love the sprinkle of Rust in all the other companion projects as well :)
The ROAPI (https://github.com/roapi/roapi) project I built happens to support a similar feature set, i.e. exposing SQLite through a variety of remote query interfaces, including the Postgres wire protocol, REST APIs, and GraphQL.
- Using Rust to write a Data Pipeline. Thoughts. Musings.
- PostgREST – Serve a RESTful API from Any Postgres Database
> why not just accept SQL and cut out all the unnecessary mapping?
You might be interested in what we're building: Seafowl, a database designed for running analytical SQL queries straight from the user's browser, with HTTP CDN-friendly caching [0]. It's a second iteration of the Splitgraph DDN [1] which we built on top of PostgreSQL (Seafowl is much faster for this use case, since it's based on Apache DataFusion + Parquet).
The tradeoff of letting the client run arbitrary SQL versus a limited API: PostgREST-style queries have fairly predictable, low overhead, but they aren't as powerful as fully-fledged SQL with aggregations, joins, window functions, and CTEs, which have their uses in interactive dashboards to reduce the amount of data that has to be processed on the client.
There's also ROAPI [2], a read-only SQL API that you can deploy in front of a database or other data source (though when using a database as the data source, it only works for tables that fit in memory).
[0] https://seafowl.io/
[1] https://www.splitgraph.com/connect
[2] https://github.com/roapi/roapi
- Command-line data analytics made easy
It could be the NDJSON parser (DataFusion source: [0]), or a variety of other factors. Looking at the ROAPI release archive [1], it doesn't ship with the `columnq` binary from your comment, so it could also have something to do with compile-time flags.
FWIW, we use the Parquet format with DataFusion and get very good speeds, similar to DuckDB [2], e.g. 1.5 s to run a more complex aggregation query `SELECT date_trunc('month', tpep_pickup_datetime) AS month, COUNT(*) AS total_trips, SUM(total_amount) FROM tripdata GROUP BY 1 ORDER BY 1 ASC` on a 55M-row subset of the NYC Taxi trip data.
[0]: https://github.com/apache/arrow-datafusion/blob/master/dataf...
[1]: https://github.com/roapi/roapi/releases/tag/roapi-v0.8.0
[2]: https://observablehq.com/@seafowl/benchmarks
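To make the shape of that aggregation concrete, here is a toy stand-in: it uses Python's built-in sqlite3 instead of DataFusion/Parquet, with SQLite's `strftime` playing the role of `date_trunc('month', …)`, and the sample rows are invented:

```python
import sqlite3

# In-memory stand-in for the (much larger) Parquet-backed tripdata table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tripdata (tpep_pickup_datetime TEXT, total_amount REAL)"
)
conn.executemany(
    "INSERT INTO tripdata VALUES (?, ?)",
    [
        ("2023-01-05 10:00:00", 12.5),
        ("2023-01-20 11:00:00", 7.5),
        ("2023-02-01 09:30:00", 20.0),
    ],
)

# Same group-by-month aggregation pattern as the DataFusion query above.
rows = conn.execute(
    """
    SELECT strftime('%Y-%m', tpep_pickup_datetime) AS month,
           COUNT(*) AS total_trips,
           SUM(total_amount) AS total_revenue
    FROM tripdata
    GROUP BY 1
    ORDER BY 1 ASC
    """
).fetchall()

print(rows)  # [('2023-01', 2, 20.0), ('2023-02', 1, 20.0)]
```

The engine differs, but the query plan is the same idea: truncate the timestamp to a month bucket, then count and sum per bucket.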
delta-rs
- Delta-rs – a Rust-based implementation of deltalake
- Delta Lake vs. Parquet: A Comparison
I work at Databricks, but am pretty much just an OSS nerd, mainly focusing on Delta Rust recently: https://github.com/delta-io/delta-rs
I did some keyword research and wrote this post because lots of folks are searching for Delta Lake vs. Parquet. I'm just trying to share a fair summary of the tradeoffs with folks doing that search. It's a popular post, which is why I figured I'd share it here.
- Working with Rust
Seeing a lot of great libraries coming out with Python bindings in the data world, e.g. delta-rs, Polars. I see Rust growing in this space as a C++ alternative.
- Ideas/Suggestions around setting up a data pipeline from scratch
If I’m not misunderstanding, you could both decode the gRPC protobuf and write to Delta Lake in Rust: Tonic and delta-rs.
- Delta-rs with upserts
https://github.com/delta-io/delta-rs/issues/850 … looks like it’s on the roadmap!
- Read and filter delta files on Azure from a .NET application
Microsoft talked a lot during the Build conference about OneLake and about the Delta file format being the standard. Is it only me who finds it strange that their marketing team talks so much about the Delta format when they don't even provide a library to work with it from .NET? It would be easy for them to maintain bindings to https://github.com/delta-io/delta-rs, and also to provide a reader that supports V-Order: https://learn.microsoft.com/en-us/fabric/data-engineering/delta-optimization-and-v-order?tabs=sparksql
- Polars query engine 0.29.0 released
I know someone will be adding this on the Python side in the coming weeks. On the Rust side you can use delta-rs with Polars, though you would be compiling both arrow2 and arrow-rs, so that's quite heavy.
- Delta Lake without Databricks?
You don’t need DBX to use Delta Lake. You can use S3 as the backend and just use the Python Delta Lake library. It works great! https://github.com/delta-io/delta-rs
- Seeking Recommendations for a Master Data Management Tool
Maybe if I get some free time soon I can formalize this into a working example. I've been wanting an excuse to try a similar concept with delta-rs and Polars/DuckDB vs. Databricks/Spark vs. Iceberg/Polars.
- Opportunity to contribute to a popular Rust data project (delta-rs)
delta-rs is a native Rust library for Delta Lake, a better way to store data than plain Parquet files, and it is a fundamentally important library for the Rust data ecosystem. It's tightly integrated with Polars and DataFusion, and there is a lot of interesting Rust work to be done.
What are some alternatives?
php-parquet - PHP implementation for reading and writing Apache Parquet files/streams. NOTICE: Please migrate to https://github.com/codename-hub/php-parquet.
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs
qframe - Immutable data frame for Go
materialize - The data warehouse for operational workloads.
ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.
fluvio - Lean and mean distributed stream processing system written in rust and web assembly.
kafka-delta-ingest - A highly efficient daemon for streaming data from Kafka into Delta Lake
datasette - An open source multi-tool for exploring and publishing data
delta-oss
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust