cleanlab VS label-errors

Compare cleanlab vs label-errors and see what their differences are.

label-errors

🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet (by cleanlab)
                 cleanlab                                   label-errors
Mentions         69                                         7
Stars            8,673                                      176
Growth           6.0%                                       4.5%
Activity         9.4                                        0.0
Latest commit    4 days ago                                 over 1 year ago
Language         Python                                     -
License          GNU Affero General Public License v3.0    GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cleanlab

Posts with mentions or reviews of cleanlab. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-27.

label-errors

Posts with mentions or reviews of label-errors. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-20.
  • Show HN: 78% MNIST accuracy using GZIP in under 10 lines of code
    5 projects | news.ycombinator.com | 20 Sep 2023
    Sadly, there are several errors in the labeled data, so no one should get 100%.

    See https://labelerrors.com/

  • Automated Data Quality at Scale
    2 projects | news.ycombinator.com | 27 Jul 2023
    Sharing some context here: in grad school, I spent months writing custom data analysis code and training ML models to find errors in large-scale datasets like ImageNet, work that eventually resulted in this paper (https://arxiv.org/abs/2103.14749) and demo (https://labelerrors.com/).

    Since then, I’ve been interested in building tools to automate this sort of analysis. We’ve finally gotten to the point where a web app can automatically do in a couple of hours what I spent months doing in Jupyter notebooks back in 2019-2020. It was really neat to see the software we built automatically produce the same figures and tables that are in our papers.

    The blog post shared here is results-focused, talking about some of the data and dataset-level issues that a tool using data-centric AI algorithms can automatically find in ImageNet, which we used as a case study. Happy to answer any questions about the post or data-centric AI in general here!

    P.S. all of our core algorithms are open-source, in case any of you are interested in checking out the code: https://github.com/cleanlab/cleanlab
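
    As a rough, illustrative sketch of what those open-source algorithms do (assuming cleanlab 2.x, where find_label_issues lives in cleanlab.filter, plus scikit-learn; the digits dataset and the 5% label-flip rate here are invented for demonstration, not from the post):

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from cleanlab.filter import find_label_issues

    X, y = load_digits(return_X_y=True)

    # Deliberately flip ~5% of the labels so there is something to find.
    rng = np.random.default_rng(0)
    noisy = y.copy()
    flipped = rng.choice(len(y), size=len(y) // 20, replace=False)
    noisy[flipped] = (noisy[flipped] + rng.integers(1, 10, size=len(flipped))) % 10

    # Out-of-sample predicted probabilities via cross-validation.
    pred_probs = cross_val_predict(
        LogisticRegression(max_iter=1000), X, noisy, cv=5, method="predict_proba"
    )

    # Indices cleanlab flags as likely label errors, most suspicious first.
    issues = find_label_issues(
        labels=noisy, pred_probs=pred_probs,
        return_indices_ranked_by="self_confidence",
    )
    print(f"flagged {len(issues)} examples; "
          f"{len(set(issues) & set(flipped))} overlap the true flips")

    Ranking by self-confidence simply sorts the flagged examples by how little probability the model assigns to each example's given label.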

  • Stanford Cars (cars196) contains many Fine-Grained Errors
    1 project | /r/datasets | 24 May 2023
    I found these issues to be pretty interesting, yet I wasn't surprised. It's pretty well known that many common ML datasets exhibit thousands of errors.
  • [N] Fine-Tuning OpenAI Language Models with Noisily Labeled Data (37% error reduction)
    2 projects | /r/MachineLearning | 3 May 2023
    We benchmarked the minimum (lower bound) of error detection across the ten most commonly used real-world ML datasets and found the lower bound is at least 50% accurate. You can see these errors yourself here: labelerrors.com (all found with Cleanlab Studio, a more advanced version of the algorithms in confident learning). This work was nominated for a best paper award at NeurIPS 2021.
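
    For a sense of what confident learning computes under the hood (a minimal sketch, assuming cleanlab 2.x, where compute_confident_joint lives in cleanlab.count; the six toy examples below are invented):

    import numpy as np
    from cleanlab.count import compute_confident_joint

    # Hypothetical toy data: six examples, two classes. Example 2 looks
    # mislabeled: its given label is 0, but the model confidently
    # predicts class 1.
    labels = np.array([0, 0, 0, 1, 1, 1])
    pred_probs = np.array([
        [0.9, 0.1],
        [0.8, 0.2],
        [0.1, 0.9],  # suspicious example
        [0.2, 0.8],
        [0.1, 0.9],
        [0.3, 0.7],
    ])

    # C[i, j] counts examples with given label i whose confidently
    # predicted (likely true) label is j; the off-diagonal mass
    # estimates the number of label errors.
    C = compute_confident_joint(labels=labels, pred_probs=pred_probs)
    print(C)
    print("estimated label errors:", C.sum() - np.trace(C))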
  • "I'm gonna make him a Neural Network he can't refuse" - Godfather of AI
    1 project | /r/datascience | 5 Jan 2023
    Twitter should use software that can detect label errors like this... FWIW even many curated ML benchmarks are full of mislabeled data: https://labelerrors.com/
  • How do we best practice preprocessing and data cleaning?
    1 project | /r/datascience | 29 Nov 2022
    If doing ML, don't forget to check your data for label errors. See for example: https://labelerrors.com/
  • How I found nearly 300,000 errors in MS COCO
    1 project | /r/deeplearning | 26 Jul 2022

What are some alternatives?

When comparing cleanlab and label-errors, you can also consider the following projects:

alibi-detect - Algorithms for outlier, adversarial and drift detection

mnist_1_pt_2 - 1.2% test error on MNIST using only least squares and numpy calls.

label-studio - Label Studio is a multi-type data labeling and annotation tool with standardized output format

hlb-CIFAR10 - Train CIFAR-10 in <7 seconds on an A100, the current world record.

argilla - Argilla is a collaboration platform for AI engineers and domain experts who require high-quality outputs, full data ownership, and overall efficiency.

mono - monorepo for personal projects, experiments, ..

labelflow - The open platform for image labelling

techniques - Techniques for deep learning with satellite & aerial imagery

karateclub - Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)

awesome-robotics - A curated list of awesome links and software libraries that are useful for robots.

SSL4MIS - Semi Supervised Learning for Medical Image Segmentation, a collection of literature reviews and code implementations.

umap_paper_notebooks - Notebooks in support of the UMAP paper