polars
- Why Python's Integer Division Floors (2010)
This is because 0.1 is in actuality the floating-point value 0.1000000000000000055511151231257827021181583404541015625, and thus 1 divided by it is ever so slightly smaller than 10. Nevertheless, fpround(1 / fpround(1 / 10)) = 10 exactly.
I found out about this recently because in Polars I defined a // b for floats to be (a / b).floor(), which does return 10 for this computation. Since Python's correctly-rounded division is rather expensive, I chose to stick to this (more context: https://github.com/pola-rs/polars/issues/14596#issuecomment-...).
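The difference between the two definitions is easy to check in plain Python, which uses correctly-rounded floor division for `//` on floats:

```python
import math
from decimal import Decimal

# The exact value stored for the literal 0.1 is slightly above 1/10:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# IEEE-754 division rounds 1 / 0.1 back to exactly 10.0 ...
assert 1 / 0.1 == 10.0
# ... so floor-of-the-rounded-quotient (the Polars definition) gives 10:
assert math.floor(1 / 0.1) == 10

# Python's float // is correctly rounded: the true quotient is just
# below 10, so flooring it yields 9.0.
assert 1 // 0.1 == 9.0
```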
- Polars
- Stuff I Learned during Hanukkah of Data 2023
That turned out to be related to pola-rs/polars#11912, and this linked comment provided a deceptively simple solution - use PARSE_DECLTYPES when creating the connection:
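For context, a minimal sketch of what that fix looks like with the standard-library `sqlite3` module (the table and column names here are made up; the original issue concerned datetime round-tripping):

```python
import datetime
import sqlite3

# Register an adapter/converter pair explicitly (the built-in date
# converters are deprecated in recent Python versions).
sqlite3.register_adapter(datetime.date, lambda d: d.isoformat())
sqlite3.register_converter("DATE", lambda b: datetime.date.fromisoformat(b.decode()))

# PARSE_DECLTYPES tells sqlite3 to look at the declared column type
# ("DATE" here) and run the matching converter on every read.
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE orders (day DATE)")
con.execute("INSERT INTO orders VALUES (?)", (datetime.date(2023, 12, 8),))
(value,) = con.execute("SELECT day FROM orders").fetchone()
con.close()

# Without detect_types, value would come back as a plain string;
# with PARSE_DECLTYPES it is a datetime.date again.
```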
- Polars 0.20 Released
- Second language
- Polars: Dataframes powered by a multithreaded query engine, written in Rust
- Summing columns in remote Parquet files using DuckDB
- Polars 0.34 is released. (A query engine focusing on DataFrame front ends)
loadtxt
- What libraries do you miss from other languages?
It really depends on what part of Numpy you're using. You can easily leave Numpy's text parsing in the dust. And if you're doing element-wise operations on arrays, you can easily see 2-3x improvement with just numba.
- Experience with heap bloat
Amdahl's Law will catch up with you fast as you add threads with this strategy, but it's simple and works for formats where a delimiter can appear in the middle of a record. For situations where you need maximum scaling and know delimiters can't appear inside records, you can use the strategy I used to implement a faster numpy.loadtxt: https://github.com/saethlin/loadtxt/blob/master/src/inner.rs#L84 The general idea is that you divide the file among threads by splitting it at approximate byte offsets, then seeking from each offset to the end of the next record. This gives each thread a non-interleaved section, so there's no duplicate parsing.
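The splitting idea from the linked Rust code can be sketched in Python (the function name and newline-delimited format are assumptions for illustration):

```python
def chunk_offsets(data: bytes, n_threads: int) -> list[tuple[int, int]]:
    """Split data into n_threads byte ranges, each starting at a record."""
    size = len(data)
    approx = size // n_threads
    starts = [0]
    for i in range(1, n_threads):
        # Start from a rough byte offset, then seek forward to the end
        # of the current record so chunks never split a line.
        pos = i * approx
        nl = data.find(b"\n", pos)
        starts.append(nl + 1 if nl != -1 else size)
    # Pair each start with the next start (or end of file).
    return [
        (s, starts[i + 1] if i + 1 < len(starts) else size)
        for i, s in enumerate(starts)
    ]

data = b"a,1\nb,22\nccc,3\nd,4\ne,5\n"
bounds = chunk_offsets(data, 3)
```

Each thread then parses its `(start, end)` range independently; because every range begins right after a newline, no record is parsed twice.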
What are some alternatives?
vaex - Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python, ML, visualization and exploration of big tabular data at a billion rows per second 🚀
SaintCoinach - A .NET library written in C# for extracting and reading game assets from Final Fantasy XIV: A Realm Reborn.
modin - Modin: Scale your Pandas workflows by changing a single line of code
tera - A template engine for Rust based on Jinja2/Django
arrow-datafusion - Apache DataFusion SQL Query Engine
plotters - A rust drawing library for high quality data plotting for both WASM and native, statically and realtimely 🦀 📈🚀
DataFrames.jl - In-memory tabular data in Julia
typed-html - Type checked JSX for Rust
datatable - A Python package for manipulating 2-dimensional tabular data structures
thirtyfour - Selenium WebDriver client for Rust, for automated testing of websites
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
okapi - OpenAPI (AKA Swagger) document generation for Rust projects