mnist_1_pt_2 vs label-errors

| | mnist_1_pt_2 | label-errors |
|---|---|---|
| Mentions | 1 | 7 |
| Stars | 16 | 176 |
| Growth | - | 4.5% |
| Activity | 5.3 | 0.0 |
| Latest commit | 9 months ago | over 1 year ago |
| Language | Python | - |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mnist_1_pt_2
-
Show HN: 78% MNIST accuracy using GZIP in under 10 lines of code
Ben Recht's kernel method implementation in 10 lines hits 98%
https://github.com/benjamin-recht/mnist_1_pt_2/tree/main
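For context, the compression-based approach named in that Show HN title can be sketched in a few lines. This is a minimal, illustrative version of the idea (1-nearest-neighbor classification under normalized compression distance), not the exact code from the post; loading MNIST via scikit-learn's fetch_openml and the subset sizes are my assumptions:

```python
import gzip
import numpy as np
from sklearn.datasets import fetch_openml

# Small subsets keep the O(n_train * n_test) compression loop tractable.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X.astype(np.uint8)
train_X, train_y = X[:1000], y[:1000]
test_X, test_y = X[60000:60100], y[60000:60100]

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    ca, cb = len(gzip.compress(a)), len(gzip.compress(b))
    cab = len(gzip.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

train_bytes = [img.tobytes() for img in train_X]
correct = 0
for img, label in zip(test_X, test_y):
    t = img.tobytes()
    # Predict the label of the training image with the smallest NCD.
    nearest = int(np.argmin([ncd(t, tb) for tb in train_bytes]))
    correct += train_y[nearest] == label
print(f"1-NN NCD accuracy: {correct / len(test_y):.2%}")
```

On subsets this small, accuracy will land well below the ~78% the post reports for the full training set.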
label-errors
-
Show HN: 78% MNIST accuracy using GZIP in under 10 lines of code
Sadly, there are several errors in the labeled data, so no one should get 100%.
See https://labelerrors.com/
-
Automated Data Quality at Scale
Sharing some context here: in grad school, I spent months writing custom data analysis code and training ML models to find errors in large-scale datasets like ImageNet, work that eventually resulted in this paper (https://arxiv.org/abs/2103.14749) and demo (https://labelerrors.com/).
Since then, I’ve been interested in building tools to automate this sort of analysis. We’ve finally gotten to the point where a web app can do automatically in a couple of hours what I spent months doing in Jupyter notebooks back in 2019-2020. It was really neat to see the software we built automatically produce the same figures and tables that are in our papers.
The blog post shared here is results-focused, discussing some of the data-level and dataset-level issues that a tool using data-centric AI algorithms can automatically find in ImageNet, which we used as a case study. Happy to answer any questions about the post or data-centric AI in general here!
P.S. All of our core algorithms are open source, in case any of you are interested in checking out the code: https://github.com/cleanlab/cleanlab
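As a concrete illustration of that open-source API, here is a hedged sketch of flagging likely label errors with cleanlab's find_label_issues. The synthetic dataset and logistic-regression model are stand-ins for your own; the key requirement is out-of-sample predicted probabilities, e.g. from cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

# Toy data standing in for a real (possibly mislabeled) dataset.
X, labels = make_classification(
    n_samples=500, n_classes=3, n_informative=5, random_state=0
)

# cleanlab expects out-of-sample predicted probabilities, so use cross-validation.
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, labels, cv=5, method="predict_proba"
)

# Indices of the examples most likely to be mislabeled, worst first.
issue_idx = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(issue_idx[:10])
```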
-
Stanford Cars (cars196) contains many Fine-Grained Errors
I found these issues to be pretty interesting, yet I wasn't surprised. It's pretty well known that many common ML datasets exhibit thousands of errors.
-
[N] Fine-Tuning OpenAI Language Models with Noisily Labeled Data (37% error reduction)
We benchmarked the minimum (lower bound) of label-error detection accuracy across the ten most commonly used real-world ML datasets and found it is at least 50%. You can see these errors yourself at labelerrors.com (all found with Cleanlab Studio, a more advanced version of the algorithms in confident learning). The underlying paper was nominated for a best paper award at NeurIPS 2021.
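For readers who want to try the noisy-label training workflow described above with the open-source library rather than Cleanlab Studio, a minimal sketch using cleanlab's CleanLearning wrapper might look like the following. It wraps any scikit-learn-compatible classifier, prunes likely label errors via confident learning, and refits; the toy data and model are assumptions here, not the OpenAI fine-tuning setup from the post:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from cleanlab.classification import CleanLearning

# Toy stand-in for a dataset with noisy labels.
X, noisy_labels = make_classification(n_samples=500, n_classes=2, random_state=0)

# fit() detects likely label issues via confident learning,
# drops them, and retrains the wrapped classifier on the cleaned data.
cl = CleanLearning(LogisticRegression(max_iter=1000))
cl.fit(X, noisy_labels)

issues = cl.get_label_issues()  # per-example DataFrame of flagged label issues
print(issues.query("is_label_issue").head())
```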
-
"I'm gonna make him a Neural Network he can't refuse" - Godfather of AI
Twitter should use software that can detect label errors like this... FWIW, even many curated ML benchmarks are full of mislabeled data: https://labelerrors.com/
-
How do we best practice preprocessing and data cleaning?
If doing ML, don't forget to check your data for label errors. See for example: https://labelerrors.com/
-
How I found nearly 300,000 errors in MS COCO
What are some alternatives?
hlb-CIFAR10 - Train CIFAR-10 in <7 seconds on an A100, the current world record.
umap_paper_notebooks - Notebooks in support of the UMAP paper
mono - monorepo for personal projects, experiments, ...
techniques - Techniques for deep learning with satellite & aerial imagery
awesome-robotics - A curated list of awesome links and software libraries that are useful for robots.
cleanlab - The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels.