cannoli vs polars

| | cannoli | polars |
|---|---|---|
| Mentions | 1 | 144 |
| Stars | 771 | 26,218 |
| Growth | - | 2.9% |
| Activity | 10.0 | 10.0 |
| Last Commit | over 5 years ago | 4 days ago |
| Language | Rust | Rust |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cannoli
- Modern Python Performance Considerations
This is a great read, and it's fantastic to see all the work being done to evaluate and improve the language!
Shameless plug alert: the dynamic nature of the language is actually something that I studied a few years back [1], particularly variable and object attribute lookups! My work was just a master's thesis, so we didn't go too deep into the trickier dynamic aspects of the language (e.g. eval, which we restricted entirely). But we did see performance improvements by restricting the language in certain ways that aid static analysis, which allowed for more performant runtime code. For those interested, the abstract of my thesis [2] gives more insight into what we were evaluating.
Our results showed that restricting dynamic code (code that is constructed at run time from other source code) and dynamic objects (mutation of the structure of classes and objects at run time) significantly improved the performance of our benchmarks.
[1]: https://github.com/joncatanio/cannoli
[2]: https://digitalcommons.calpoly.edu/theses/1886/
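As a rough illustration (the `Point` class and its attributes are invented here, not taken from the thesis), these are the two kinds of dynamism the comment describes: mutating object structure at run time, and constructing code at run time:

```python
# Illustrative only: the kinds of Python dynamism the thesis restricts.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)

# "Dynamic objects": mutating the structure of classes/objects at run time.
p.z = 3  # add a brand-new attribute to one instance
Point.norm = lambda self: (self.x ** 2 + self.y ** 2) ** 0.5  # add a method to the class
print(p.norm())

# "Dynamic code": source constructed and executed at run time (eval/exec).
expr = "p.x + p.y + p.z"
print(eval(expr))  # 6
```

Both features force an interpreter to resolve attributes and names at run time; forbidding them lets a compiler resolve lookups statically, which is where the reported speedups come from.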
polars
- Why Python's Integer Division Floors (2010)
This is because 0.1 is in actuality the floating point value 0.1000000000000000055511151231257827021181583404541015625, and thus 1 divided by it is ever so slightly smaller than 10. Nevertheless, fpround(1 / fpround(1 / 10)) = 10 exactly.
I found out about this recently because in Polars I defined a // b for floats to be (a / b).floor(), which does return 10 for this computation. Since Python's correctly-rounded division is rather expensive, I chose to stick to this (more context: https://github.com/pola-rs/polars/issues/14596#issuecomment-...).
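The arithmetic in the comment can be checked directly in Python (a quick sketch; `math.floor(a / b)` stands in here for the `(a / b).floor()` definition the comment describes):

```python
import math
from decimal import Decimal

# 0.1 cannot be represented exactly; the stored double is slightly larger:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625

# Rounded float division: the exact quotient 9.999...944 rounds up to 10.0.
print(1 / 0.1)  # 10.0

# floor(a / b), the cheaper definition used for float //:
print(math.floor(1 / 0.1))  # 10

# Python's // instead floors the *exact* quotient, which is just below 10:
print(1 // 0.1)  # 9.0
```

The discrepancy (10 vs. 9) is exactly the trade-off described: floor-of-rounded-quotient is cheap, while Python's correctly-rounded floor division pays extra to floor the exact quotient.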
- Polars
https://github.com/pola-rs/polars/releases/tag/py-0.19.0
- Stuff I Learned during Hanukkah of Data 2023
That turned out to be related to pola-rs/polars#11912, and this linked comment provided a deceptively simple solution - use PARSE_DECLTYPES when creating the connection:
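A minimal sketch of that fix using Python's built-in `sqlite3` module (the in-memory database and `orders` table are hypothetical, just to show the flag's effect):

```python
import datetime
import sqlite3

# PARSE_DECLTYPES tells sqlite3 to run registered converters based on a
# column's *declared* type, so a DATE column comes back as datetime.date
# instead of a raw string.
conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE orders (ordered DATE)")
conn.execute("INSERT INTO orders VALUES (?)", (datetime.date(2023, 12, 7),))

(value,) = conn.execute("SELECT ordered FROM orders").fetchone()
print(type(value), value)  # <class 'datetime.date'> 2023-12-07
```

Without `detect_types`, the same query would return the stored text `'2023-12-07'`, which is the mismatch the linked issue describes.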
- Polars 0.20 Released
- Second language ("Segunda linguagem")
- Polars: Dataframes powered by a multithreaded query engine, written in Rust
- Summing columns in remote Parquet files using DuckDB
- Polars 0.34 is released (a query engine focusing on DataFrame front ends)
What are some alternatives?
pymartini - A Cython port of Martini for fast RTIN terrain mesh generation
vaex - Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python, ML, visualization and exploration of big tabular data at a billion rows per second 🚀
modin - Modin: Scale your Pandas workflows by changing a single line of code
uvloop - Ultra fast asyncio event loop.
datafusion - Apache DataFusion SQL Query Engine
femtolisp - a lightweight, robust, scheme-like lisp implementation
DataFrames.jl - In-memory tabular data in Julia
yaegi - Yaegi is Another Elegant Go Interpreter
datatable - A Python package for manipulating 2-dimensional tabular data structures
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
db-benchmark - reproducible benchmark of database-like ops