label-errors

🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet (by cleanlab)

Label-errors Alternatives

Similar projects and alternatives to label-errors

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number generally means a better label-errors alternative or higher similarity.

label-errors reviews and mentions

Posts with mentions or reviews of label-errors. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-20.
  • Show HN: 78% MNIST accuracy using GZIP in under 10 lines of code
    5 projects | news.ycombinator.com | 20 Sep 2023
Sadly, there are several errors in the labeled data, so no one should get 100%.

    See https://labelerrors.com/

  • Automated Data Quality at Scale
    2 projects | news.ycombinator.com | 27 Jul 2023
    Sharing some context here: in grad school, I spent months writing custom data analysis code and training ML models to find errors in large-scale datasets like ImageNet, work that eventually resulted in this paper (https://arxiv.org/abs/2103.14749) and demo (https://labelerrors.com/).

    Since then, I’ve been interested in building tools to automate this sort of analysis. We’ve finally gotten to the point where a web app can do automatically in a couple of hours what I spent months doing in Jupyter notebooks back in 2019-2020. It was really neat to see the software we built automatically produce the same figures and tables that are in our papers.

    The blog post shared here is results-focused, talking about some of the data and dataset-level issues that a tool using data-centric AI algorithms can automatically find in ImageNet, which we used as a case study. Happy to answer any questions about the post or data-centric AI in general here!

    P.S. All of our core algorithms are open-source, in case any of you are interested in checking out the code: https://github.com/cleanlab/cleanlab (a brief usage sketch appears after the list of posts below).

  • Stanford Cars (cars196) contains many Fine-Grained Errors
    1 project | /r/datasets | 24 May 2023
    I found these issues to be pretty interesting, yet I wasn't surprised. It's pretty well known that many common ML datasets exhibit thousands of errors.
  • [N] Fine-Tuning OpenAI Language Models with Noisily Labeled Data (37% error reduction)
    2 projects | /r/MachineLearning | 3 May 2023
    We benchmarked the minimum (lower bound) of error detection across the ten most commonly used real-world ML datasets and found the lower bound is at least 50% accurate. You can see these errors yourself here: labelerrors.com (all found with Cleanlab Studio, a more advanced version of the algorithms in confident learning); this work was nominated for a best paper award at NeurIPS 2021.
  • "I'm gonna make him a Neural Network he can't refuse" - Godfather of AI
    1 project | /r/datascience | 5 Jan 2023
    twitter should use software that can detect label errors like this... FWIW even many curated ML benchmarks are full of mislabeled data: https://labelerrors.com/
  • How do we best practice preprocessing and data cleaning?
    1 project | /r/datascience | 29 Nov 2022
    If doing ML, don't forget to check your data for label errors. See for example: https://labelerrors.com/
  • How I found nearly 300,000 errors in MS COCO
    1 project | /r/deeplearning | 26 Jul 2022
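
For readers who want to try the open-source algorithms mentioned in the posts above, here is a minimal sketch (not taken from any of the posts) of how the cleanlab package is commonly used to flag likely label errors. The toy labels and pred_probs arrays are placeholders for your own given labels and out-of-sample predicted probabilities.

    # Minimal sketch: flag likely label errors with the open-source cleanlab package.
    # Assumes you already have (possibly noisy) labels and out-of-sample predicted
    # class probabilities (e.g. from cross-validation) for your own dataset.
    import numpy as np
    from cleanlab.filter import find_label_issues

    # labels: shape (n,), the given class labels
    # pred_probs: shape (n, k), predicted probability of each class per example
    labels = np.array([0, 1, 1, 2, 0])
    pred_probs = np.array([
        [0.90, 0.05, 0.05],
        [0.10, 0.80, 0.10],
        [0.70, 0.20, 0.10],   # labeled 1, but the model strongly favors class 0
        [0.05, 0.10, 0.85],
        [0.80, 0.10, 0.10],
    ])

    # Indices of the examples most likely to be mislabeled, worst first
    issue_indices = find_label_issues(
        labels, pred_probs, return_indices_ranked_by="self_confidence"
    )
    print(issue_indices)

Reviewing the returned indices by hand (as done for labelerrors.com) is how flagged examples are typically confirmed as true label errors rather than model mistakes.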

Stats

Basic label-errors repo stats
Mentions: 7
Stars: 176
Activity: 0.0
Last commit: over 1 year ago
