| | connector-x | duckdb |
|---|---|---|
| Mentions | 11 | 52 |
| Stars | 1,786 | 16,749 |
| Growth | 2.5% | 4.5% |
| Activity | 9.1 | 10.0 |
| Last commit | 6 days ago | 7 days ago |
| Language | Rust | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
connector-x
- How moving from Pandas to Polars made me write better code without writing better code
  This was originally a blocker; however, we managed to set up a multi-stage Docker build to build from source. Here is the GitHub issue where we, along with community members, managed to solve it.
- I used multiprocessing and multithreading at the same time to drop the execution time of my code from 155+ seconds to just over 2 seconds
  There are packages like connector-x and polars that do a lot of what you're mentioning out of the box. I used these two to massively speed up an SQLAlchemy + Pandas based ETL in the past as well.
- Rust in Data Science?
  Thanks for sharing connector-x; I will also start to use it. I wonder if there is a list of tools like that. I know Ruff, Polars, pydantic-core.
- Querying Postgres Tables Directly from DuckDB
  I was trying https://github.com/sfu-db/connector-x and hacking around with https://github.com/spitz-dan-l/postgres-binary-parser, but it turned out that a COPY to CSV using asyncpg and then converting to Parquet was the fastest.
- An alternative to TradingView?
  If you store the OHLC data in a relational database, use connector-x to load the data into a pandas DataFrame.
- Python and ETL
  For SQL reading I'd really recommend connector-x; it does a great job of preventing unneeded serialization and doesn't have to go through Python.
- Fastest library to load data from DB to DataFrames
- Waiting for your data loading from database to dataframes?
  Indeed, we currently do not support persistent connections across different queries. We focus on the bulk-loading scenario, where the bottleneck is the data size and the connection-construction overhead is negligible. However, one possible solution to the problem is to expose the connection pool object that we use inside Rust to users, so the next call could reuse the same pool. We do not plan for this yet, but we are happy to see whether this is a common need! Feel free to open an issue in our GitHub repo: https://github.com/sfu-db/connector-x
  Feel free to ask any questions here or open an issue in our GitHub repo: https://github.com/sfu-db/connector-x. You can also join our Discord community: https://discord.com/invite/xwbkFNk and ask questions in the connector channel!
- ConnectorX: The fastest tool to load data from databases to dataframes
duckdb
- 🪄 DuckDB SQL hack: get things SORTED w/ constraint CHECK
- DuckDB: Move to push-based execution model (2021)
- DuckDB performance improvements with the latest release
  I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
- Building a Distributed Data Warehouse Without Data Lakes
  It's an interesting question!
  The problem is that the data is spread everywhere - no choice about that. So, with that in mind, how do you query it? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau [1] and DuckDB [2], you no longer have to - a single query can be sharded among all your data, effectively giving you a lot of what you want from a data lake.
  It's not a replacement, but if you can do a few of these things WITHOUT moving the data, you will see really significant cost and time savings.
  [1] https://github.com/bacalhau-project/bacalhau
  [2] https://github.com/duckdb/duckdb
- DuckDB 0.9.0
- Push or Pull, is this a question?
  [4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
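The PR referenced above moved DuckDB from a pull-based (Volcano-style) model to a push-based one. A toy contrast of the two models, purely for illustration and not DuckDB's actual implementation, might look like this:

```python
# Pull-based (Volcano-style): each operator asks its child for the next
# value. Control flow lives in the consumer.
def pull_scan(rows):
    yield from rows

def pull_filter(child, pred):
    for row in child:
        if pred(row):
            yield row

pull_result = list(pull_filter(pull_scan([1, 2, 3, 4]), lambda x: x % 2 == 0))

# Push-based: the source drives each value through a chain of sinks.
# Control flow lives in the producer, which makes it easier to run
# multiple sources in parallel and to batch work per operator.
def push_scan(rows, sink):
    for row in rows:
        sink(row)

def make_push_filter(pred, sink):
    def operate(row):
        if pred(row):
            sink(row)
    return operate

push_result = []
push_scan([1, 2, 3, 4], make_push_filter(lambda x: x % 2 == 0, push_result.append))

print(pull_result, push_result)  # [2, 4] [2, 4]
```

Both pipelines compute the same answer; the difference is which side of the operator boundary owns the loop.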
- Show HN: Hydra 1.0 – open-source column-oriented Postgres
  It depends on your query, obviously.
  In general, I did very deep benchmarking of pg, ClickHouse, and duckdb, and I sure didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
  My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped, so I chose pg because:
  - pg performance is acceptable, maybe 2-3x slower than ClickHouse and duckdb on some queries, if pg is configured correctly and run on compressed storage
  - ClickHouse and duckdb start falling apart very fast because they are specialized for a very narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
- 🦆 Effortless Data Quality w/duckdb on GitHub ♾️
  This action installs DuckDB at the version provided as input.
- Using SQL inside Python pipelines with DuckDB, GlareDB (and others?)
  DuckDB: https://github.com/duckdb/duckdb - seems pretty popular; I've been keeping an eye on this for close to a year now.
- CSV or Parquet File Format
  The Parquet-Go library is very complex, and I have not yet succeeded in using it, so I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
What are some alternatives?
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
ClickHouse - ClickHouse® is a free analytics DBMS for big data
Rudderstack - Privacy and Security focused Segment-alternative, in Golang and React
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
lightweight-charts - Performant financial charts built with HTML5 canvas
datasette - An open source multi-tool for exploring and publishing data
mmr - Python based algorithmic trading platform for Interactive Brokers
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
postgres-binary-parser - Cython implementation of a parser for PostgreSQL's COPY WITH BINARY format
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
datafusion-ballista - Apache Arrow Ballista Distributed Query Engine
datafusion - Apache DataFusion SQL Query Engine