zingg
splink
| | zingg | splink |
|---|---|---|
| Mentions | 23 | 16 |
| Stars | 877 | 1,086 |
| Growth | 2.3% | 8.7% |
| Activity | 9.3 | 9.9 |
| Latest commit | 4 days ago | 2 days ago |
| Language | Java | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zingg
-
Ask HN: What is the most impactful thing you've ever built?
As part of my data consulting, I struggled with identity resolution and started working on scalable, no-code identity resolution - https://github.com/zinggAI/zingg/ . It has pushed my limits as a software engineer and product builder, and I had to do a lot of learning to build it. It's cool to see people use Zingg in their workflows and save months of work on custom solutions. A big highlight has been North Carolina Open Campaign Data https://crossroads-cx.medium.com/building-open-access-to-nc-...
-
How to find open source data science python projects to contribute to?
Check https://github.com/zinggAI/zingg/. We recently added Python to our stack and are looking for help with building dbt-zingg Python models, Databricks-Zingg Python notebooks, a Python API, a Python-based front end, etc.
- Merging datasets
-
Is it possible to "fuzzy match" or dedupe columns in Redshift?
If you are open to using a framework for this, check Zingg at https://github.com/zinggAI/zingg. It connects to Redshift, Snowflake and other warehouses and can handle multiple columns.
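Zingg is driven by a JSON configuration file rather than code. Below is a minimal, illustrative sketch of the field-definition portion, with key names following the example configs in the repo's examples directory; the `data`/`output` pipe definitions (where a Redshift JDBC connection would be configured) are omitted, so treat this as a sketch and check the repo's examples for the exact schema.

```json
{
  "modelId": 100,
  "zinggDir": "models",
  "numPartitions": 4,
  "labelDataSampleSize": 0.01,
  "fieldDefinition": [
    { "fieldName": "fname", "fields": "fname", "dataType": "\"string\"", "matchType": "fuzzy" },
    { "fieldName": "email", "fields": "email", "dataType": "\"string\"", "matchType": "exact" }
  ]
}
```

Each `fieldDefinition` entry tells Zingg which column to compare and how (e.g. fuzzy vs. exact matching), which is how a single run can weigh several columns together.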
-
Show HN: Zingg – open-source entity resolution for single source of truth
Thanks for your support. Yes, we do ship with some examples and their models, which can be run out of the box. We have three customer demographic datasets and an e-commerce item-matching dataset across Google and Amazon. You can check them here: https://github.com/zinggAI/zingg/tree/main/examples
-
Question about Github Referring Sites
I have an open source project hosted at https://github.com/zinggAI/zingg/.
- How do I promote the project appropriately?
-
GitHub Java Projects to Contribute
Check Zingg out at https://github.com/zinggAI/zingg and let me know if you would like to contribute
-
Match over 1 GB of data with inconsistent names
This is interesting, would love to get your feedback on Zingg (https://github.com/zinggAI/zingg) if you are up to it. Thanks!
-
Open source entity resolution - need your feedback!
I have released an open source entity resolution tool, Zingg (https://github.com/zinggAI/zingg). Zingg uses Spark and ML to build a single source of truth directly in the warehouse or the data lake. Would love to hear what the Reddit folks here think about it: do you find it useful? What can I do to make it better? Any advice on the problem or the solution?
splink
- Splink: Fast, accurate, scalable probabilistic data linkage
-
Ask HN: What projects are you working on?
https://github.com/moj-analytical-services/splink
-
Record linkage/Entity linkage
Record linkage has been a big part of a project I've been working on for 6 months now. I personally think a great and free solution would be the splink package in Python, which can handle 10+ million rows. It implements the Fellegi-Sunter model (equivalent to a naive-Bayes model), the classical model in record linkage. It can be trained in an unsupervised manner using some initial parameter estimates (these are quite intuitive) followed by expectation maximisation. The features in the model are different pairwise string comparisons on your fields of interest. These can include exact equality; edit-distance comparisons like Levenshtein distance and Jaro-Winkler; and phonetic comparisons like Soundex and Double Metaphone. The splink package will handle training the model and then all the graph theory at the end to connect your links into clusters. All the details you'll need are in the links. https://www.robinlinacre.com/probabilistic_linkage/ https://moj-analytical-services.github.io/splink/
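The Fellegi-Sunter scoring described above can be sketched in a few lines of plain Python. This is a toy illustration, not splink's API: the `m`/`u` probabilities are made up for the example (splink would estimate them via expectation maximisation), and the field names are hypothetical.

```python
import math

# Toy m/u probabilities for two comparison fields (invented for illustration):
#   m = P(field agrees | records are a true match)
#   u = P(field agrees | records are NOT a match)
params = {
    "first_name": {"m": 0.9, "u": 0.01},
    "city":       {"m": 0.8, "u": 0.10},
}

def match_weight(agreements: dict) -> float:
    """Sum of log2 Fellegi-Sunter weights over the comparison fields."""
    w = 0.0
    for field, p in params.items():
        if agreements[field]:
            w += math.log2(p["m"] / p["u"])          # agreement weight
        else:
            w += math.log2((1 - p["m"]) / (1 - p["u"]))  # disagreement weight
    return w

def match_probability(agreements: dict, prior_odds: float = 1 / 999) -> float:
    """Convert the total match weight into a posterior match probability."""
    odds = prior_odds * 2 ** match_weight(agreements)
    return odds / (1 + odds)

# A pair agreeing on both fields scores far above the prior; a pair
# agreeing on neither scores far below it.
p_both = match_probability({"first_name": True, "city": True})
p_none = match_probability({"first_name": False, "city": False})
```

The "naive-Bayes equivalence" mentioned above is visible here: each field contributes an independent log-likelihood-ratio term, and the terms are simply summed.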
-
What is the best approach to removing duplicate person records if the only identifier is the person's first name, middle name and last name? These names are entered in varying ways in the DB, thus they are free-format.
https://moj-analytical-services.github.io/splink/ is a FOSS Python package (but it runs against your DB using SQL).
-
DuckDB – in-process SQL OLAP database management system
If you're curious, I've written a FOSS record linkage library that executes everything as SQL. It supports multiple SQL backends including DuckDB and Spark for scale, and runs faster than most competitors because it's able to leverage the speed of these backends: https://github.com/moj-analytical-services/splink
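The "everything as SQL" idea above — generate a blocked self-join rather than comparing every record pair — can be sketched with nothing but Python's built-in sqlite3. This mimics the shape of what such a library generates, not splink's actual SQL or API; the table and blocking rule are invented for the example.

```python
import sqlite3

# In-memory table of records to deduplicate (toy data).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, first TEXT, last TEXT)")
con.executemany(
    "INSERT INTO people VALUES (?, ?, ?)",
    [
        (1, "john", "smith"),
        (2, "jon",  "smith"),
        (3, "mary", "jones"),
    ],
)

# Self-join restricted by a blocking key (first letter of the surname),
# so candidate pairs are generated inside the database instead of by
# comparing all n*(n-1)/2 pairs in application code.
candidate_pairs = con.execute(
    """
    SELECT a.id, b.id
    FROM people a
    JOIN people b
      ON a.id < b.id
     AND substr(a.last, 1, 1) = substr(b.last, 1, 1)
    """
).fetchall()
```

Because the heavy lifting is a single SQL statement, swapping the engine (SQLite, DuckDB, Spark SQL, Athena) mostly means regenerating dialect-appropriate SQL — which is the portability argument made above.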
-
Ask HN: What have you created that deserves a second chance on HN?
Splink - a python library for probabilistic record linkage (fuzzy matching/entity resolution).
Splink is dramatically faster and works on much larger datasets than other open source libraries. I'm particularly proud of the fact we support multiple execution backends (at the moment DuckDB, Spark, Athena and SQLite, but additional adaptors are relatively straightforward to write).
We've had >4 million PyPI downloads and it's used in government, academia and the private sector, often replacing extremely expensive proprietary solutions.
https://github.com/moj-analytical-services/splink
More info in blog posts here:
-
Conformed Dimensions problem that keeps recurring on every project
Splink is a SQL tool that can do this https://github.com/moj-analytical-services/splink
-
How do you join two sources with attributes that aren't identical?
A probabilistic record matching model such as Fellegi-Sunter. Check out the splink package in Python.
-
Splink 3: Fast, accurate and scalable record linkage (entity resolution) in Python
Main docs here: https://moj-analytical-services.github.io/splink
-
Splink 3: Fast, accurate and scalable fuzzy record linkage in Python with support for multiple backends (FOSS)
It'd be great to see Splink add value in this area! Do give us a shout if you have any questions. The best place to post is on the Github discussions: https://github.com/moj-analytical-services/splink/discussions
What are some alternatives?
clrs
dedupe - A Python library for accurate and scalable fuzzy matching, record deduplication and entity resolution.
rumble - ⛈️ RumbleDB 1.21.0 "Hawthorn blossom" 🌳 for Apache Spark | Run queries on your large-scale, messy JSON-like data (JSON, text, CSV, Parquet, ROOT, AVRO, SVM...) | No install required (just a jar to download) | Declarative Machine Learning and more
libpostal - A C library for parsing/normalizing street addresses around the world. Powered by statistical NLP and open geo data.
CLRS - Algorithms implementation in C++ and solutions of questions (both code and math proof) from “Introduction to Algorithms” (3e) (CLRS) in LaTeX.
sqlglot - Python SQL Parser and Transpiler
skipledger - Differential privacy solution for maintaining and exposing information from evolving, append-only journals / ledgers.
entity-embed - PyTorch library for transforming entities like companies, products, etc. into vectors to support scalable Record Linkage / Entity Resolution using Approximate Nearest Neighbors.
yt-channels-DS-AI-ML-CS - A comprehensive list of 180+ YouTube Channels for Data Science, Data Engineering, Machine Learning, Deep learning, Computer Science, programming, software engineering, etc.
dblink - Distributed Bayesian Entity Resolution in Apache Spark
automount - Simple devd(8) based automounter for FreeBSD
KaithemAutomation - Pure Python, GUI-focused home automation/consumer grade SCADA