| | connector-x | polars |
|---|---|---|
| Mentions | 11 | 144 |
| Stars | 1,786 | 26,378 |
| Growth | 2.5% | 3.4% |
| Activity | 9.1 | 10.0 |
| Last commit | 6 days ago | about 20 hours ago |
| Language | Rust | Rust |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
connector-x
- How moving from Pandas to Polars made me write better code without writing better code
This was originally a blocker; however, we managed to set up a multi-stage Docker build to build from source. Here is the GitHub issue where we, along with community members, solved it.
- I used multiprocessing and multithreading at the same time to drop the execution time of my code from 155+ seconds to just over 2 seconds
There are packages like connector-x and polars that do a lot of what you're mentioning out of the box. I used these two to massively speed up an SQLAlchemy + Pandas based ETL in the past as well.
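The combination the post describes can be sketched with the standard library alone. This is a hypothetical workload, not the poster's actual code: processes spread chunks of work across CPU cores, and threads inside each process overlap the I/O waits.

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def fetch(item):
    """Stand-in for an I/O-bound call (network request, database query)."""
    time.sleep(0.01)
    return item * 2

def process_chunk(chunk):
    # Inside one worker process, threads overlap the I/O waits.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, chunk))

def run(chunks):
    # Across worker processes, several chunks are handled at once.
    with ProcessPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    chunks = [list(range(i, i + 10)) for i in range(0, 40, 10)]
    print(run(chunks)[0])
```

With purely I/O-bound work, threads alone would be enough; adding processes pays off when each chunk also involves CPU-bound parsing or transformation that the GIL would otherwise serialize.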
- Rust in Data Science?
Thanks for sharing connector-x; I will also start to use it. I wonder if there is a list of tools like that. I know Ruff, Polars, pydantic-core.
- Querying Postgres Tables Directly from DuckDB
I was trying https://github.com/sfu-db/connector-x and hacking around with https://github.com/spitz-dan-l/postgres-binary-parser, but it turned out that a COPY to CSV using asyncpg and then converting to Parquet was the fastest.
- An alternative to TradingView?
If you store the OHLC data in a relational database, use connector-x to load the data into a pandas DataFrame.
- Python and ETL
For SQL reading I'd really recommend connector-x; it does a great job of avoiding unneeded serialization and doesn't have to go through Python.
- Fastest library to load data from DB to DataFrames
- Waiting for your data loading from database to dataframes?
Indeed, we do not currently support persistent connections across queries. We target the bulk-loading scenario, where the bottleneck is the data size and the connection-construction overhead is negligible. One possible solution, however, would be to expose the connection pool object we use inside Rust, so the next call could reuse the same pool. We do not plan for this yet, but we are happy to hear whether it is a common need! Feel free to open an issue in our GitHub repo: https://github.com/sfu-db/connector-x
Feel free to ask any questions here or open an issue in our GitHub repo: https://github.com/sfu-db/connector-x . You can also join our Discord community: https://discord.com/invite/xwbkFNk and ask questions in the connector channel!
- ConnectorX: The fastest tool to load data from databases to dataframes
polars
- Why Python's Integer Division Floors (2010)
This is because 0.1 is in actuality the floating-point value 0.1000000000000000055511151231257827021181583404541015625, and thus 1 divided by it is ever so slightly smaller than 10. Nevertheless, fpround(1 / fpround(1 / 10)) = 10 exactly.
I found out about this recently because in Polars I defined a // b for floats to be (a / b).floor(), which does return 10 for this computation. Since Python's correctly-rounded division is rather expensive, I chose to stick to this (more context: https://github.com/pola-rs/polars/issues/14596#issuecomment-...).
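The two behaviours the comment contrasts can be reproduced in plain Python, no Polars required:

```python
import math

# Python's // on floats is correctly rounded: since 0.1 is really
# 0.1000000000000000055511151231257827..., the exact quotient 1 / 0.1
# is slightly below 10, so flooring it gives 9.
assert 1.0 // 0.1 == 9.0

# Rounding the division first yields exactly 10.0
# (fpround(1 / fpround(1/10)) = 10), and floor() keeps it --
# this is the (a / b).floor() definition described above.
assert math.floor(1.0 / 0.1) == 10.0
```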
- Polars
https://github.com/pola-rs/polars/releases/tag/py-0.19.0
- Stuff I Learned during Hanukkah of Data 2023
That turned out to be related to pola-rs/polars#11912, and the linked comment provided a deceptively simple solution: use PARSE_DECLTYPES when creating the connection.
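For context, here is a minimal sketch of what PARSE_DECLTYPES does with the standard-library sqlite3 driver; the table and column names are made up for illustration:

```python
import datetime
import sqlite3

# With detect_types=sqlite3.PARSE_DECLTYPES, sqlite3 looks at the declared
# column type (here TIMESTAMP) and applies the registered converter, so
# values come back as datetime objects instead of raw strings.
conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE events (ts TIMESTAMP)")
conn.execute("INSERT INTO events VALUES (?)",
             (datetime.datetime(2023, 12, 10, 18, 0),))
(ts,) = conn.execute("SELECT ts FROM events").fetchone()
print(type(ts))  # datetime.datetime rather than str
```

Note that Python 3.12 deprecates the default datetime adapters and converters (a DeprecationWarning is emitted), though they still work as shown.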
- Polars 0.20 Released
- Second language
- Polars: Dataframes powered by a multithreaded query engine, written in Rust
- Summing columns in remote Parquet files using DuckDB
- Polars 0.34 is released (a query engine focusing on DataFrame front ends)
What are some alternatives?
Rudderstack - Privacy and Security focused Segment-alternative, in Golang and React
vaex - Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python, ML, visualization and exploration of big tabular data at a billion rows per second 🚀
lightweight-charts - Performant financial charts built with HTML5 canvas
modin - Modin: Scale your Pandas workflows by changing a single line of code
mmr - Python based algorithmic trading platform for Interactive Brokers
datafusion - Apache DataFusion SQL Query Engine
postgres-binary-parser - Cython implementation of a parser for PostgreSQL's COPY WITH BINARY format
DataFrames.jl - In-memory tabular data in Julia
duckdb - DuckDB is an in-process SQL OLAP Database Management System
datatable - A Python package for manipulating 2-dimensional tabular data structures
datafusion-ballista - Apache Arrow Ballista Distributed Query Engine
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing