polars vs datafusion

| | polars | datafusion |
|---|---|---|
| Mentions | 149 | 61 |
| Stars | 31,656 | 6,667 |
| Growth (stars, month over month) | 1.6% | 2.4% |
| Activity | 10.0 | 10.0 |
| Last commit | 5 days ago | 14 days ago |
| Language | Rust | Rust |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
polars
- Using Polars in Rust for high-performance data analysis
If you want to get into Polars, the library is very well documented, and I’d recommend you check out their getting started tutorial, their API docs, and when you’re all set up, you can also check out their Cookbooks to learn about many of the standard operations within Polars.
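To make the getting-started pointer a bit more concrete, here is a minimal sketch of a Polars query from Rust (assuming a recent `polars` crate with the `lazy` feature enabled; the column names and data are made up for illustration):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    // Build a small in-memory DataFrame (made-up data).
    let df = df!(
        "city"  => ["Berlin", "Berlin", "Paris"],
        "sales" => [10, 20, 30]
    )?;

    // Lazy query: filter rows, group, aggregate, then execute with collect().
    let out = df
        .lazy()
        .filter(col("sales").gt(lit(5)))
        .group_by([col("city")])
        .agg([col("sales").sum().alias("total_sales")])
        .collect()?;

    println!("{out}");
    Ok(())
}
```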
- Why Polars rewrote its Arrow string data type
This is false. The Polars API has used smart strings for a long time.
https://github.com/pola-rs/polars/blob/32a2325b55f9bce81d019...
- Polars releases v1.0.0 – a Pandas alternative
- Polars Releases v1.0.0
- Big Data Is Dead
- Why Python's Integer Division Floors (2010)
This is because 0.1 is in actuality the floating-point value 0.1000000000000000055511151231257827021181583404541015625, and thus 1 divided by it is ever so slightly smaller than 10. Nevertheless, fpround(1 / fpround(1 / 10)) = 10 exactly.
I found out about this recently because in Polars I defined a // b for floats to be (a / b).floor(), which does return 10 for this computation. Since Python's correctly-rounded division is rather expensive, I chose to stick to this (more context: https://github.com/pola-rs/polars/issues/14596#issuecomment-...).
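A tiny standalone Rust sketch of the rounding behaviour described above (not the actual Polars implementation, just the `(a / b).floor()` idea from the comment):

```rust
fn main() {
    // The f64 literal 0.1 is actually slightly *larger* than 1/10, so the
    // exact quotient 1 / 0.1 is slightly smaller than 10 ...
    let b: f64 = 0.1;

    // ... but rounding that quotient to the nearest f64 gives exactly 10.0.
    let q = 1.0_f64 / b;
    assert_eq!(q, 10.0);

    // Floor division defined as (a / b).floor() therefore returns 10 here,
    // whereas Python's correctly-rounded float floor division (1 // 0.1)
    // returns 9.
    assert_eq!(q.floor(), 10.0);
}
```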
- Polars
https://github.com/pola-rs/polars/releases/tag/py-0.19.0
- Stuff I Learned during Hanukkah of Data 2023
That turned out to be related to pola-rs/polars#11912, and this linked comment provided a deceptively simple solution - use PARSE_DECLTYPES when creating the connection:
- Polars 0.20 Released
- Second language
datafusion
- Ask HN: Who wants to be hired? (February 2025)
Remote: Yes
Willing to relocate: Yes
Technologies: Rust, Nodejs, Javascript, Typescript, Golang
Résumé/CV: https://drive.google.com/drive/folders/1ecTn700lcmt8cqlnBTtm...
Email: [email protected]
Github: https://github.com/jatin510
Info: Hi, I'm Jagdish Parihar! A Backend Engineer with 4+ years of experience building scalable systems and microservices using Rust, Node.js, and Golang. I've contributed to open-source projects like Apache DataFusion and thrive on solving complex backend challenges.
I'm exploring opportunities at database-focused startups and looking for an entry point as an engineer working on databases. I'm currently contributing to open source and am open to part-time or full-time roles working with databases.
Datafusion contributions: https://github.com/apache/datafusion/pulls?q=is%3Apr+author%...
Datafusion comet contributions: https://github.com/apache/datafusion-comet/pulls?q=is%3Apr+a...
Let’s connect!
- Apache DataFusion
- How to build a new Harlequin adapter with Poetry
Harlequin is a lightweight TUI client for SQL databases, known for its extensive support for different database backends. It is a versatile tool for data exploration and analysis workflows. Harlequin provides an interactive SQL editor with features like autocomplete, syntax highlighting, and query history, as well as a results viewer that can display large result sets. However, Harlequin did not have a DataFusion adapter before. Thankfully, it was really easy to add one.
- Why you should keep an eye on Apache DataFusion and its community.
In case you don't know what Apache DataFusion is, here's the high-level blurb.
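For readers who don't know the project, a minimal sketch of what using DataFusion from Rust looks like (assuming the `datafusion` and `tokio` crates; the table name and CSV path are hypothetical):

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // Register a CSV file (hypothetical path) as a table named "example".
    let ctx = SessionContext::new();
    ctx.register_csv("example", "example.csv", CsvReadOptions::new())
        .await?;

    // Run SQL against it and print the resulting record batches.
    let df = ctx.sql("SELECT count(*) AS n FROM example").await?;
    df.show().await?;
    Ok(())
}
```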
- Make Rust Object Oriented with the dual-trait pattern
I've invented 😎 this dual-trait pattern for the purposes of the logical planner, as seen in this merged PR. The problem was that the nodes in the plan (filter, select, etc.) had to support at the same time:
- Pg_lakehouse: A DuckDB Alternative in Postgres
- Velox: Meta's Unified Execution Engine [pdf]
Python's Substrait seems like the biggest/most-used competitor-ish out there. I'd love some compare & contrast; my sense is that Substrait has a smaller ambition, and more wants to be a language for talking about execution rather than a full on execution engine. https://github.com/substrait-io/substrait
We can also see from the DataFusion discussion that they too see themselves as a bit of a Velox competitor. https://github.com/apache/arrow-datafusion/discussions/6441
- What I Talk About When I Talk About Query Optimizer (Part 1): IR Design
Agree, substrait is a really cool project! Related: if you like substrait you might want to check out datafusion too. The project is a query execution engine built on top of Apache Arrow (with SQL parser, query planner & optimizer, execution engine, extensible user defined functions, among others) and it implements a substrait provider and consumer: https://github.com/apache/arrow-datafusion/tree/main/datafus...
- DuckDB performance improvements with the latest release
The draft contains some preliminary benchmark results, comparing it to DuckDB.
https://github.com/apache/arrow-datafusion/issues/6782
- Apache Arrow DataFusion
What are some alternatives?
datatable - A Python package for manipulating 2-dimensional tabular data structures
ClickHouse - ClickHouse® is a real-time analytics database management system
modin - Modin: Scale your Pandas workflows by changing a single line of code
DuckDB - DuckDB is an analytical in-process SQL database management system
DataFrames.jl - In-memory tabular data in Julia
databend - Data, Analytics & AI. Modern alternative to Snowflake. Cost-effective and simple for massive-scale analytics. https://databend.com
vaex - Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python, ML, visualization and exploration of big tabular data at a billion rows per second 🚀
db-benchmark - reproducible benchmark of database-like ops
Apache Arrow - Apache Arrow is the universal columnar format and multi-language toolbox for fast data interchange and in-memory analytics
arrow2 - Transmute-free Rust library to work with the Arrow format
PyO3 - Rust bindings for the Python interpreter
fluvio - Lean and mean distributed stream processing system written in Rust and WebAssembly. Alternative to Kafka + Flink in one.