SaaSHub helps you find the best software and product alternatives
Label-errors Alternatives
Similar projects and alternatives to label-errors
-
cleanlab
The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels.
-
awesome-robotics
A curated list of awesome links and software libraries that are useful for robots. (by ahundt)
label-errors reviews and mentions
-
Show HN: 78% MNIST accuracy using GZIP in under 10 lines of code
Sadly, there are several errors in the labeled data, so no one should get 100%.
See https://labelerrors.com/
-
Automated Data Quality at Scale
Sharing some context here: in grad school, I spent months writing custom data analysis code and training ML models to find errors in large-scale datasets like ImageNet, work that eventually resulted in this paper (https://arxiv.org/abs/2103.14749) and demo (https://labelerrors.com/).
Since then, I’ve been interested in building tools to automate this sort of analysis. We’ve finally gotten to the point where a web app can do automatically in a couple of hours what I spent months doing in Jupyter notebooks back in 2019–2020. It was really neat to see the software we built automatically produce the same figures and tables that are in our papers.
The blog post shared here is results-focused, talking about some of the data and dataset-level issues that a tool using data-centric AI algorithms can automatically find in ImageNet, which we used as a case study. Happy to answer any questions about the post or data-centric AI in general here!
P.S. all of our core algorithms are open-source, in case any of you are interested in checking out the code: https://github.com/cleanlab/cleanlab
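To make the idea concrete, here is a minimal NumPy sketch of the core confident-learning intuition behind cleanlab: flag examples whose given label disagrees with the class the model is confidently predicting. This is a simplified illustration, not the library's full algorithm; the function name and thresholding rule here are assumptions made for the example (cleanlab itself exposes this via `cleanlab.filter.find_label_issues`).

```python
import numpy as np

def find_likely_label_errors(labels, pred_probs):
    """Sketch of the confident-learning idea: flag examples whose
    given (possibly noisy) label looks inconsistent with out-of-sample
    predicted probabilities.

    labels: (n,) int array of given labels
    pred_probs: (n, k) array of out-of-sample predicted probabilities
    """
    n, k = pred_probs.shape
    # Per-class confidence threshold: the average predicted probability
    # of class j among examples that are labeled j.
    thresholds = np.array(
        [pred_probs[labels == j, j].mean() for j in range(k)]
    )
    # An example is "confidently" of class j if its probability for j
    # meets class j's threshold.
    confident = pred_probs >= thresholds  # (n, k) boolean mask
    # Among the confident classes, pick the most probable one.
    masked = np.where(confident, pred_probs, -np.inf)
    best = masked.argmax(axis=1)
    has_confident = confident.any(axis=1)
    # Flag examples whose most-confident class disagrees with the label.
    return has_confident & (best != labels)

# Toy usage: the last example is labeled 1, but the model is confident
# it belongs to class 0, so it gets flagged as a likely label error.
labels = np.array([0, 0, 1, 1])
pred_probs = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.2, 0.8],
    [0.9, 0.1],  # suspicious: labeled 1, predicted 0
])
print(find_likely_label_errors(labels, pred_probs))
# → [False False False  True]
```

The key design point is that the thresholds are learned per class from the model's own confidence on examples bearing that label, which makes the rule robust to class-dependent calibration differences.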
-
Stanford Cars (cars196) contains many Fine-Grained Errors
I found these issues to be pretty interesting, yet I wasn't surprised. It's pretty well known that many common ML datasets exhibit thousands of errors.
-
[N] Fine-Tuning OpenAI Language Models with Noisily Labeled Data (37% error reduction)
We benchmarked the minimum (lower bound) of label-error detection across the ten most commonly used real-world ML datasets and found the lower bound is at least 50% accurate. You can see these errors yourself here: labelerrors.com (all found with Cleanlab Studio, a more advanced version of the algorithms in confident learning), and this work was nominated for a best paper award at NeurIPS 2021.
-
"I'm gonna make him a Neural Network he can't refuse" - Godfather of AI
Twitter should use software that can detect label errors like this... FWIW, even many curated ML benchmarks are full of mislabeled data: https://labelerrors.com/
-
How do we best practice preprocessing and data cleaning?
If doing ML, don't forget to check your data for label errors. See for example: https://labelerrors.com/
-
How I found nearly 300,000 errors in MS COCO
-
Stats
cleanlab/label-errors is an open source project licensed under the GNU General Public License v3.0 only, which is an OSI-approved license.