[D] Hashing techniques to compare large datasets?

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning

  • annoy

    Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk

  • There is a technique called similarity hashing for finding sets of similar items in a large database in roughly O(1) time per query (building the index costs O(n)). You might find the Annoy library useful: https://github.com/spotify/annoy (see the sketch after this list).

  • dedup

    Find duplicate text files.

  • There is actually a whole family of hash functions, called locality-sensitive hashes (LSH), with the property that the probability of a hash collision increases with the similarity of the hashed values. I've used SimHash myself for textual similarity, but LSH can also be used to find similar images, audio, or other data types (a from-scratch SimHash sketch follows below).
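
A minimal sketch of the Annoy workflow mentioned above. The 64-dimensional random vectors stand in for real embeddings, and parameters like the `angular` metric and the number of trees are illustrative assumptions, not recommendations:

```python
from annoy import AnnoyIndex
import random

dim = 64                                # assumed vector dimensionality
index = AnnoyIndex(dim, "angular")      # angular metric ~ cosine distance

# Indexing costs O(n): add each item once, then build the tree forest.
for item_id in range(1000):
    vector = [random.gauss(0, 1) for _ in range(dim)]
    index.add_item(item_id, vector)
index.build(10)                         # more trees -> better recall, larger index
index.save("items.ann")                 # indexes can be saved to / mmapped from disk

# Each lookup afterwards is roughly constant-time relative to n.
query = [random.gauss(0, 1) for _ in range(dim)]
print(index.get_nns_by_vector(query, 5))  # ids of 5 approximate nearest neighbors
```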
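
And a from-scratch sketch of the SimHash idea from the LSH comment: similar texts produce fingerprints with a small Hamming distance. The whitespace tokenization and the 64-bit MD5-derived token hashes are simplifying assumptions for illustration:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    # One signed vote counter per output bit.
    votes = [0] * bits
    for token in text.lower().split():
        # Hash each token to a `bits`-wide integer.
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) % (1 << bits)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    # Fingerprint bit i is 1 when the majority of token hashes set it.
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

near_dupe_a = "locality sensitive hashing finds similar items quickly"
near_dupe_b = "locality sensitive hashing finds similar documents quickly"
unrelated   = "a completely different sentence about cooking pasta"

print(hamming(simhash(near_dupe_a), simhash(near_dupe_b)))  # small distance
print(hamming(simhash(near_dupe_a), simhash(unrelated)))    # large distance
```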

