Show HN: Want something better than k-means? Try BanditPAM

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • BanditPAM

    BanditPAM C++ implementation and Python package

  • Thanks for the bug report and repro steps! I've filed this issue at https://github.com/motiwari/BanditPAM/issues/244 on our repo.

    I suspect that this is because the scikit-learn implementation of KMeans subsamples the data and uses some highly-optimized data structures for larger datasets. I've asked the team to see how we can use some of those techniques in BanditPAM, and will update the GitHub repo as we learn more and improve our implementation.

  • river

    🌊 Online machine learning in Python

  • Hey, great work. Do you think this algorithm would lend itself to an online setting? I'm the author of River (https://riverml.xyz), where we're looking for good online clustering algorithms.
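For context on what "online" means here: the classic sequential k-means baseline processes one point at a time, nudging the nearest centroid by a 1/n step. A minimal numpy sketch of that baseline (an illustration only, not River's or BanditPAM's API; the blob data and seeding are made up):

```python
import numpy as np

def online_kmeans_update(centroids, counts, x):
    # Assign x to its nearest centroid, then move that centroid toward x
    # with a 1/n step -- each centroid stays the running mean of its points.
    j = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))
    counts[j] += 1
    centroids[j] += (x - centroids[j]) / counts[j]
    return j

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs, arriving one point at a time.
blob_a = rng.normal(loc=(0.0, 0.0), scale=0.1, size=(200, 2))
blob_b = rng.normal(loc=(5.0, 5.0), scale=0.1, size=(200, 2))
stream = np.concatenate([blob_a, blob_b])
rng.shuffle(stream)

centroids = np.stack([blob_a[0], blob_b[0]])  # one seed per cluster
counts = np.ones(2)
for x in stream:
    online_kmeans_update(centroids, counts, x)
```

The open question for a k-medoids method like BanditPAM is harder than for k-means, since a medoid must be an actual data point, which fights against this kind of incremental averaging.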

  • bolt

    10x faster matrix and vector operations (by dblalock)

  • > frown on that sort of dataset

    That example was definitely contrived and designed to strongly illustrate the point. I'll counter slightly that non-peaky topologies aren't uncommon, but they're unlikely to look like anything that would push KMedoids into a pathological state rather than just a slightly worse one ("worse" assuming that KMeans is the right choice for a given problem).
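The pathological-vs-slightly-worse distinction comes down to the center definitions: a mean can be dragged arbitrarily far by a single outlier, while a medoid must be an actual data point. A tiny numpy illustration (contrived 1-D data, not either library's API):

```python
import numpy as np

# One tight 1-D cluster plus a single extreme outlier.
points = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 100.0])

# k-means-style center: the mean, which the outlier drags far away.
mean = points.mean()

# k-medoids-style center: the data point minimizing total distance
# to all other points, which stays inside the tight cluster.
dists = np.abs(points[:, None] - points[None, :]).sum(axis=1)
medoid = points[np.argmin(dists)]
```

Here the mean lands nowhere near the bulk of the data, while the medoid remains one of the clustered points.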

    > worth pointing out .. data reference

    Totally agreed. I hope my answer didn't come across as too negative. It's good work, and since everyone else was already covering the positives, I didn't want to spend too much time echoing them while getting the other points across.

    > bolt reference

    https://github.com/dblalock/bolt

    They say as much in their paper: they aren't the first vector quantization library by any stretch. Their contributions are, roughly:

    1. If you're careful about selecting the right binning strategy, you can cancel out a meaningful amount of discretization error.

    2. If you do that, you can afford to choose parameters that fit everything neatly into AVX2 machine words, turning hundreds of branching instructions into 1-4 instructions.

    3. Running some real-world tests to show that (1) and (2) matter.
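The vector-quantization idea behind (1) and (2) — split vectors into subspaces, bin each subspace against a small codebook, and turn distance computation into table lookups — can be sketched in numpy. This is a generic product-quantization toy, not Bolt's actual binning strategy or AVX2 kernels, and the codebooks here are just random training points rather than learned centroids:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_sub, k = 500, 16, 4, 16      # 4 subspaces of 4 dims, 16 codes each
X = rng.normal(size=(n, d))
sub = d // n_sub

# One small codebook per subspace, built from random training points
# (a real implementation would run k-means per subspace instead).
codebooks = np.stack([X[rng.choice(n, k, replace=False), s*sub:(s+1)*sub]
                      for s in range(n_sub)])        # (n_sub, k, sub)

def encode(x):
    # Map each subvector to the index of its nearest codeword.
    return np.array([
        np.argmin(((codebooks[s] - x[s*sub:(s+1)*sub]) ** 2).sum(axis=1))
        for s in range(n_sub)
    ])

codes = np.stack([encode(x) for x in X])             # (n, n_sub) small ints

def approx_sq_dists(q):
    # Precompute per-subspace lookup tables once per query; after that,
    # each database distance is n_sub table lookups plus adds, which is
    # the part Bolt packs into SIMD instructions.
    tables = np.stack([
        ((codebooks[s] - q[s*sub:(s+1)*sub]) ** 2).sum(axis=1)
        for s in range(n_sub)
    ])                                               # (n_sub, k)
    return tables[np.arange(n_sub), codes].sum(axis=1)

q = rng.normal(size=d)
approx = approx_sq_dists(q)
exact = ((X - q) ** 2).sum(axis=1)
```

The careful-binning point in (1) is about keeping the gap between `approx` and `exact` small enough that the downstream task doesn't notice.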

    Last I checked, their code wasn't very effective for the places I wanted to apply it, but the paper is pretty solid. I'd swap in a faster KMeans approximation that's less likely to crash on big data (maybe even initialized with KMedoids :) ), and if the thing you're quantizing is trainable with some sort of gradient update step, then you should also do a few optimization passes in the discretized form.



Related posts

  • Want something better than k-means? Try BanditPAM (github.com/motiwari)

    1 project | /r/linux | 27 Jun 2023
  • [Q] How should I perform clustering on angular data?

    2 projects | /r/statistics | 22 May 2023
  • Show HN: Want something better than k-means? Try BanditPAM

    1 project | /r/patient_hackernews | 5 Apr 2023
  • Show HN: Want something better than k-means? Try BanditPAM

    1 project | /r/hackernews | 5 Apr 2023
  • [D] Is it possible to update random forest parameters with new data instead of retraining on all data?

    1 project | /r/MachineLearning | 17 Jan 2023