postgres_scanner vs splink

| | postgres_scanner | splink |
|---|---|---|
| Mentions | 6 | 16 |
| Stars | 179 | 1,091 |
| Growth | 5.0% | 2.8% |
| Activity | 9.3 | 9.9 |
| Latest commit | 15 days ago | 6 days ago |
| Language | C++ | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
postgres_scanner
-
Connect ODBC Databases to DuckDB
I've created an ODBC DuckDB extension to query any database that has an ODBC driver. It's modeled after the fantastic official Postgres scanner extension https://github.com/duckdblabs/postgres_scanner.
It supports fetching rowsets in batches to minimize network overhead, defaulting to DuckDB's standard vector size of 2048.
I've tested it against the IBM DB2 and Postgres ODBC drivers and will continue to test and add support for all major databases. If you've got one you'd like to see, let me know in the comments.
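The batching strategy described above can be sketched in plain Python (a hypothetical illustration, not the extension's actual C++ implementation): rows are pulled from any cursor-like iterable in chunks matching the DuckDB vector size, so each round trip fills one vector.

```python
from itertools import islice

VECTOR_SIZE = 2048  # DuckDB's standard vector size, per the post above

def fetch_batches(rows, batch_size=VECTOR_SIZE):
    """Yield lists of up to batch_size rows from any iterable."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# 5000 fake rows split into vectors of 2048, 2048 and 904
sizes = [len(b) for b in fetch_batches(range(5000))]
```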
I've got plenty of improvements in the pipeline including:
-
DuckDB 0.7.0
It's not a dumb question at all. I'm pretty knowledgeable with DBs and still find it very difficult to understand how many of these front-end/pass-through engines work.
Check out Postgres Foreign Data Wrappers. That might be the most well-known approach for accessing one database through another. The Supabase team wrote an interesting piece about this recently.
https://supabase.com/blog/postgres-foreign-data-wrappers-rus...
You might also want to try out DuckDB's approach to reading other DBs (or DB files). They talk about how they can "import" a SQLite DB in the above 0.7.0 announcement, but also have some other examples in their duckdblabs GitHub organisation. Check out their "...-scanner" repos:
https://github.com/duckdblabs/postgres_scanner
https://github.com/duckdblabs/sqlite_scanner
-
DuckDB – in-process SQL OLAP database management system
Doesn't Postgres have a columnar option? If so, you could probably get better performance for your analytical interactions if you switched some tables to columnar storage.
Otherwise, check out the Postgres scanner: https://github.com/duckdblabs/postgres_scanner
-
DuckDB on YugabyteDB
-
Notes on the SQLite DuckDB Paper
DuckDB can actually read SQLite or Postgres directly! In the SQLite case, something like Litestream plus DuckDB could work really well!
Also, with PyArrow's help, DuckDB can already do this with Delta tables!
https://github.com/duckdblabs/sqlite_scanner
https://github.com/duckdblabs/postgres_scanner
-
Friendlier SQL with DuckDB
Interesting thought! I have not tried this yet so I only have a guess as an answer. Could you export the data as SQL statements and then run those statements on DuckDB? That may be easier to set up, but may take longer to run...
DuckDB also has the ability to read Postgres data directly, and there is a Postgres FDW that can read from DuckDB!
https://github.com/duckdblabs/postgres_scanner
https://github.com/alitrack/duckdb_fdw
splink
- Splink: Fast, accurate, scalable probabilistic data linkage
-
Ask HN: What projects are you working on?
https://github.com/moj-analytical-services/splink
-
Record linkage/Entity linkage
Record linkage has been a big part of a project I've been working on for 6 months now. I personally think a great and free solution would be the splink package in Python, which can handle 10+ million rows. It implements the Fellegi-Sunter model (equivalent to a naive-Bayes model), the classical model in record linkage. It can be trained in an unsupervised manner using some initial parameter estimates (these are quite intuitive) followed by expectation maximisation. The features in the model will be different pairwise string comparisons on your fields of interest. These can include exact equality; edit-distance comparisons like Levenshtein distance and Jaro-Winkler; and phonetic comparisons like Soundex and Double Metaphone. The splink package will handle training the model and then all the graph theory at the end to connect all your links into clusters. All the details you'll need are in the links. https://www.robinlinacre.com/probabilistic_linkage/ https://moj-analytical-services.github.io/splink/
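The Fellegi-Sunter scoring described above can be sketched in a few lines of plain Python (illustrative m/u values and field names, not splink's API): for each comparison, m is P(agreement | match) and u is P(agreement | non-match); an agreeing field contributes log2(m/u) to the match weight, a disagreeing one log2((1-m)/(1-u)).

```python
import math

# Hypothetical m/u probabilities for three exact-match comparisons
comparisons = {
    "first_name_exact": {"m": 0.90, "u": 0.01},
    "surname_exact":    {"m": 0.95, "u": 0.005},
    "dob_exact":        {"m": 0.98, "u": 0.001},
}

def match_weight(agreements):
    """Sum log2(m/u) for agreeing fields, log2((1-m)/(1-u)) otherwise."""
    total = 0.0
    for field, p in comparisons.items():
        if agreements[field]:
            total += math.log2(p["m"] / p["u"])
        else:
            total += math.log2((1 - p["m"]) / (1 - p["u"]))
    return total

# Two agreeing name fields outweigh one disagreeing date of birth
w = match_weight({"first_name_exact": True,
                  "surname_exact": True,
                  "dob_exact": False})
```

A positive weight is evidence for a match, a negative one against; thresholding these totals (or the equivalent probabilities) is what produces candidate links for clustering.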
-
What is the best approach to removing duplicate person records if the only identifiers are the person's first name, middle name and last name? These names are entered in varying ways in the DB, so they are effectively free-format.
https://moj-analytical-services.github.io/splink/ is a FOSS Python package (but it runs against your DB using SQL).
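Before reaching for a full linkage model, a phonetic key such as Soundex (one of the comparison types mentioned elsewhere on this page) can group free-format name variants for a first-pass dedup. A stdlib-only sketch, not splink's implementation:

```python
def soundex(name: str) -> str:
    """American Soundex: first letter + three digits from consonant classes."""
    codes = {}
    for group, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                         ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in group:
            codes[ch] = digit
    name = name.lower()
    result = [name[0].upper()]
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":               # h and w are transparent
            continue
        code = codes.get(ch, "")     # vowels reset the previous code
        if code and code != prev:    # collapse adjacent duplicates
            result.append(code)
        prev = code
    return ("".join(result) + "000")[:4]

# "Robert" and "Rupert" share a key, as do "Smith" and "Smyth"
```

Records sharing a key become candidate duplicates; a proper comparison (edit distance, or a trained model) then decides within each group.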
-
DuckDB – in-process SQL OLAP database management system
If you're curious, I've written a FOSS record linkage library that executes everything as SQL. It supports multiple SQL backends including DuckDB and Spark for scale, and runs faster than most competitors because it's able to leverage the speed of these backends: https://github.com/moj-analytical-services/splink
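The "executes everything as SQL" idea can be illustrated with a toy sketch (hypothetical table and column names, using stdlib sqlite3 as a stand-in backend): generate one pairwise-comparison query string, then hand it to whichever SQL engine is available — the same string could equally target DuckDB or Spark SQL.

```python
import sqlite3

# One generated comparison query: all unordered pairs within a block
PAIRWISE_SQL = """
SELECT l.id AS id_l, r.id AS id_r,
       (l.surname = r.surname) AS surname_match
FROM people AS l
JOIN people AS r ON l.id < r.id   -- each pair once
WHERE l.city = r.city             -- a simple blocking rule
"""

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, surname TEXT, city TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?)",
                [(1, "Smith", "Leeds"), (2, "Smith", "Leeds"),
                 (3, "Jones", "York")])
pairs = con.execute(PAIRWISE_SQL).fetchall()
# only the two Leeds rows are compared, and their surnames match
```

Pushing the pairwise comparisons into the backend like this is why such a library scales with the engine rather than with Python.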
-
Ask HN: What have you created that deserves a second chance on HN?
Splink - a Python library for probabilistic record linkage (fuzzy matching/entity resolution).
Splink is dramatically faster and works on much larger datasets than other open source libraries. I'm particularly proud of the fact that we support multiple execution backends (at the moment DuckDB, Spark, Athena and SQLite, but additional adaptors are relatively straightforward to write).
We've had >4 million PyPI downloads, and it's used in government, academia and the private sector, often replacing extremely expensive proprietary solutions.
https://github.com/moj-analytical-services/splink
More info in blog posts here:
-
Conformed Dimensions problem that keeps recurring on every project
Splink is a SQL tool that can do this https://github.com/moj-analytical-services/splink
-
How do you join two sources with attributes that aren't identical?
A probabilistic record-matching model such as Fellegi-Sunter. Check out the splink package in Python.
-
Splink 3: Fast, accurate and scalable record linkage (entity resolution) in Python
Main docs here: https://moj-analytical-services.github.io/splink
-
Splink 3: Fast, accurate and scalable fuzzy record linkage in Python with support for multiple backends (FOSS)
It'd be great to see Splink add value in this area! Do give us a shout if you have any questions. The best place to post is on the Github discussions: https://github.com/moj-analytical-services/splink/discussions
What are some alternatives?
odbc-scanner-duckdb-extension - A DuckDB extension to read data directly from databases supporting the ODBC interface
zingg - Scalable identity resolution, entity resolution, data mastering and deduplication using ML
ClickBench - ClickBench: a Benchmark For Analytical Databases
dedupe - A python library for accurate and scalable fuzzy matching, record deduplication and entity-resolution.
sqlite_scanner - DuckDB extension to read and write to SQLite databases
libpostal - A C library for parsing/normalizing street addresses around the world. Powered by statistical NLP and open geo data.
budibase - Budibase is an open-source low code platform that helps you build internal tools in minutes 🚀
sqlglot - Python SQL Parser and Transpiler
go-duckdb - go-duckdb provides a database/sql driver for the DuckDB database engine.
entity-embed - PyTorch library for transforming entities like companies, products, etc. into vectors to support scalable Record Linkage / Entity Resolution using Approximate Nearest Neighbors.
duckdb - DuckDB is an in-process SQL OLAP Database Management System
dblink - Distributed Bayesian Entity Resolution in Apache Spark