| | duckdf | splink |
|---|---|---|
| Mentions | 3 | 16 |
| Stars | 41 | 1,091 |
| Growth | - | 2.8% |
| Activity | 0.0 | 9.9 |
| Latest commit | 4 months ago | 5 days ago |
| Language | R | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
duckdf
-
DuckDB – in-process SQL OLAP database management system
Quite a while ago, when duckdb was just a duckling, I wrote an R package that supported direct manipulation of R dataframes using SQL.[1] duckdb was the engine for this.
The approach was never as fast as data.table but did approach the speed of dplyr for more complex queries.
Life had other things in store for me and I haven’t touched this library for a while now.
At the time there was no Julia connector for duckdb, but now that there is, I’d like to try this approach in that language.
[1] https://github.com/phillc73/duckdf
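duckdf itself is an R package, but the underlying idea (running SQL directly against an in-memory dataframe, with DuckDB as the engine) can be sketched with DuckDB's Python API. A minimal sketch; the `flights` dataframe and its columns are invented for illustration:

```python
# Minimal sketch of the duckdf idea, in Python rather than R: run SQL
# directly against an in-memory dataframe, with DuckDB as the engine.
# The dataframe and its columns are invented for illustration.
import duckdb
import pandas as pd

flights = pd.DataFrame({
    "carrier": ["AA", "AA", "UA", "DL"],
    "dep_delay": [12, -3, 45, 7],
})

# DuckDB's replacement scans let the SQL refer to the local variable by name.
result = duckdb.sql("""
    SELECT carrier, AVG(dep_delay) AS mean_delay
    FROM flights
    GROUP BY carrier
    ORDER BY mean_delay DESC
""").df()

print(result)
```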
-
ClickHouse as an alternative to Elasticsearch for log storage and analysis
Yeah, I agree sqldf is quite slow. Fair point.
As you've seen, duckdb registers an "R data frame as a virtual table." I'm not sure what they mean by "yet" either.
Of course it is possible to write an R dataframe to an on-disk duckdb table, if that's what you want to do.
There are some simple benchmarks at the bottom of the duckdf README[1]. Essentially I found that for basic SQL SELECT queries dplyr is quicker, but for much more complex queries the duckdf/duckdb combination performs better.
If you really want speed of course, just use data.table.
[1] https://github.com/phillc73/duckdf
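For concreteness, here is roughly what those two behaviours look like through DuckDB's Python API (duckdf wraps the equivalent R interface); the table and file names below are just placeholders:

```python
# Sketch of the two behaviours described above, via DuckDB's Python API.
# Table and file names are placeholders.
import duckdb
import pandas as pd

people = pd.DataFrame({"name": ["Ann", "Bob"], "age": [34, 51]})

con = duckdb.connect("people.duckdb")  # on-disk database file

# 1. Register the dataframe as a virtual table: nothing is copied, and
#    queries read straight from the in-memory dataframe.
con.register("people_view", people)
print(con.execute("SELECT count(*) FROM people_view").fetchall())

# 2. Or persist it as a real on-disk table, if that's what you want to do.
con.execute("CREATE TABLE people AS SELECT * FROM people_view")
con.close()
```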
-
Julia 1.6: what has changed since Julia 1.0?
That's a good point that I hadn't really thought about. I'd never considered the difference between calling plain functions versus macros.
Thinking about Query.jl and DataFramesMeta.jl (and I'm certainly not an expert in either), I can't speak specifically to your `head` example, but other base functions can be combined with macros. For example, see the LINQ examples from DataFramesMeta.jl[1], where `mean` is being used, or the LINQ-style examples in Query.jl[2], where `descending` is used in the first example and `length` appears later in the Grouping examples.
Is that the kind of thing you meant?
For whatever reason, with the way my brain is wired, the LINQ style of query just works for me. I have never directly used LINQ, but do have some SQL experience. In fact, I wrote some dinky little wrapper functions[3] around duckdb[4] so I could directly query R dataframes and datatables with SQL using that backend, rather than sqldf[5].
[1] https://juliadata.github.io/DataFramesMeta.jl/stable/#@linq-...
[2] https://www.queryverse.org/Query.jl/stable/linqquerycommands...
[3] https://github.com/phillc73/duckdf
[4] https://duckdb.org/
[5] https://cran.r-project.org/web/packages/sqldf/index.html
splink
-
Splink: Fast, accurate, scalable probabilistic data linkage
-
Ask HN: What projects are you working on?
https://github.com/moj-analytical-services/splink
-
Record linkage/Entity linkage
Record linkage has been a big part of a project I've been working on for six months now. I think a great free solution is the splink package in Python, which can handle 10m+ rows. It implements the Fellegi-Sunter model (equivalent to a naive Bayes model), which is the classical model in record linkage. It can be trained in an unsupervised manner using some initial parameter estimates (these are quite intuitive) followed by expectation maximisation. The features in the model are different pairwise string comparisons on your fields of interest. These can include exact equality; edit-distance comparisons like Levenshtein distance and Jaro-Winkler; and phonetic comparisons like Soundex and Double Metaphone. The splink package will handle training the model and then all the graph theory at the end to connect all your links into clusters. All the details you'll need are in the links. https://www.robinlinacre.com/probabilistic_linkage/ https://moj-analytical-services.github.io/splink/
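To make the "equivalent to a naive Bayes model" point concrete, here is a toy sketch of Fellegi-Sunter scoring for one candidate pair. The field names and m/u probabilities are invented; in splink they are estimated from the data (partly via expectation maximisation) rather than set by hand:

```python
# Toy sketch of Fellegi-Sunter scoring: each comparison contributes a
# log2(m/u) "match weight", exactly like a naive Bayes likelihood ratio.
# The m/u values and field names below are invented for illustration.
import math

comparisons = {
    # field: (m = P(agree | same entity), u = P(agree | different entities))
    "first_name_jaro_winkler_gt_0.9": (0.85, 0.02),
    "surname_exact":                  (0.90, 0.01),
    "dob_levenshtein_le_1":           (0.95, 0.05),
}

def match_probability(agreements, prior=1e-4):
    """Combine agreement/disagreement evidence into P(match) for one pair."""
    log_odds = math.log2(prior / (1 - prior))
    for field, (m, u) in comparisons.items():
        if agreements[field]:
            # Agreement is evidence for a match.
            log_odds += math.log2(m / u)
        else:
            # Disagreement is evidence against a match.
            log_odds += math.log2((1 - m) / (1 - u))
    odds = 2 ** log_odds
    return odds / (1 + odds)

pair = {
    "first_name_jaro_winkler_gt_0.9": True,
    "surname_exact": True,
    "dob_levenshtein_le_1": False,
}
print(f"P(match) = {match_probability(pair):.3f}")
```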
-
What is the best approach to removing duplicate person records if the only identifiers are a person's first name, middle name and last name? These names are entered into the DB in varying ways, so they are free-format.
https://moj-analytical-services.github.io/splink/ is a FOSS python package (but it runs against your db using SQL).
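As a rough illustration of what that looks like for the name-deduplication question above, here is a Splink 3 style sketch against the DuckDB backend. Import paths and some function names have changed between Splink versions, and the thresholds, blocking rule and toy data are made up, so treat the official docs as canonical:

```python
# Hedged sketch of a Splink 3 style dedupe job (API details vary by version).
import pandas as pd
from splink.duckdb.linker import DuckDBLinker
import splink.duckdb.comparison_library as cl

df = pd.DataFrame({
    "unique_id":   [1, 2, 3],
    "first_name":  ["John", "Jon", "Mary"],
    "middle_name": ["A", "A", None],
    "surname":     ["Smith", "Smith", "Jones"],
})

settings = {
    "link_type": "dedupe_only",
    "comparisons": [
        cl.jaro_winkler_at_thresholds("first_name", [0.9, 0.7]),
        cl.exact_match("middle_name"),
        cl.levenshtein_at_thresholds("surname", 2),
    ],
    # Only compare pairs that already agree on surname, to keep the
    # pairwise comparison space tractable (illustrative blocking rule).
    "blocking_rules_to_generate_predictions": ["l.surname = r.surname"],
}

linker = DuckDBLinker(df, settings)

# Unsupervised training: u probabilities from random sampling, then
# expectation maximisation for the m probabilities.
linker.estimate_u_using_random_sampling(max_pairs=1e6)
linker.estimate_parameters_using_expectation_maximisation("l.surname = r.surname")

pairs = linker.predict()
clusters = linker.cluster_pairwise_predictions_at_threshold(
    pairs, threshold_match_probability=0.95
)
print(clusters.as_pandas_dataframe())
```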
-
DuckDB – in-process SQL OLAP database management system
If you're curious, I've written a FOSS record linkage library that executes everything as SQL. It supports multiple SQL backends including DuckDB and Spark for scale, and runs faster than most competitors because it's able to leverage the speed of these backends: https://github.com/moj-analytical-services/splink
-
Ask HN: What have you created that deserves a second chance on HN?
Splink - a python library for probabilistic record linkage (fuzzy matching/entity resolution).
Splink is dramatically faster and works on much larger datasets than other open source libraries. I'm particularly proud of the fact that we support multiple execution backends (at the moment DuckDB, Spark, Athena and SQLite, but additional adaptors are relatively straightforward to write).
We've had >4 million pypi downloads and it's used in government, academia and the private sector, often replacing extremely expensive proprietary solutions.
https://github.com/moj-analytical-services/splink
More info in blog posts here:
-
Conformed Dimensions problem that keeps recurring on every project
Splink is a SQL tool that can do this https://github.com/moj-analytical-services/splink
-
How do you join two sources with attributes that aren't identical?
Probabilistic record matching model such as a Fellegi-Sunter. Check out the splink package in Python.
-
Splink 3: Fast, accurate and scalable record linkage (entity resolution) in Python
Main docs here: https://moj-analytical-services.github.io/splink
-
Splink 3: Fast, accurate and scalable fuzzy record linkage in Python with support for multiple backends (FOSS)
It'd be great to see Splink add value in this area! Do give us a shout if you have any questions. The best place to post is on the Github discussions: https://github.com/moj-analytical-services/splink/discussions
What are some alternatives?
tidyquery - Query R data frames with SQL
zingg - Scalable identity resolution, entity resolution, data mastering and deduplication using ML
Typesense - Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
dedupe - A python library for accurate and scalable fuzzy matching, record deduplication and entity-resolution.
julia - The Julia Programming Language
libpostal - A C library for parsing/normalizing street addresses around the world. Powered by statistical NLP and open geo data.
loki - Like Prometheus, but for logs.
sqlglot - Python SQL Parser and Transpiler
Makie.jl - Interactive data visualizations and plotting in Julia
entity-embed - PyTorch library for transforming entities like companies, products, etc. into vectors to support scalable Record Linkage / Entity Resolution using Approximate Nearest Neighbors.
MeiliSearch - A lightning-fast search API that fits effortlessly into your apps, websites, and workflow
dblink - Distributed Bayesian Entity Resolution in Apache Spark