I think this problem is known as data deduplication, or more specifically entity resolution. I googled a bit, and approaches seem to range from manual deduplication to some form of active learning (if I am not mistaken). I am also curious whether pre-trained transformer-based cross-encoders could give good results here (they are trained on sentence pairs, I think, but it may be worth a try). Another open question is how to measure progress, i.e. how to compare different approaches.
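As a minimal sketch of both ideas, here is a threshold-based fuzzy matcher using only the standard library (`difflib.SequenceMatcher` standing in for a real similarity model), plus a pairwise precision/recall metric for comparing approaches against a hand-labeled set of duplicate pairs. The record strings, the 0.6 threshold, and the helper names are all illustrative assumptions, not anything from a specific library:

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]; a stand-in for a learned scorer."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Toy records with obvious near-duplicates (illustrative data).
records = [
    "Acme Corp.",
    "ACME Corporation",
    "Globex Inc",
    "Globex Incorporated",
    "Initech",
]

# Naive all-pairs comparison with an arbitrary threshold; real systems
# use blocking/indexing to avoid the O(n^2) blow-up.
THRESHOLD = 0.6
matches = [
    (a, b) for a, b in combinations(records, 2)
    if similarity(a, b) >= THRESHOLD
]

def pairwise_scores(predicted, gold):
    """Pairwise precision/recall/F1 against labeled duplicate pairs.

    Pairs are assumed to be in the same canonical order in both lists.
    """
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

The same `pairwise_scores` yardstick works for any method you swap into `similarity` (edit distance, embeddings, a cross-encoder), which is one common way to compare approaches on a shared labeled sample.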