cleanlab VS token-label-error-benchmarks

Compare cleanlab vs token-label-error-benchmarks and see what their differences are.

                 cleanlab                                  token-label-error-benchmarks
Mentions         69                                        2
Stars            8,673                                     3
Growth           6.0%                                      -
Activity         9.4                                       10.0
Last Commit      3 days ago                                over 1 year ago
Language         Python                                    Jupyter Notebook
License          GNU Affero General Public License v3.0    GNU Affero General Public License v3.0
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
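
To make the "recent commits carry more weight" idea concrete, here is a small illustrative sketch in Python. The exponential half-life weighting and the activity_score helper are assumptions made up for illustration only; the site's actual scoring formula is not published, and the real metric is presumably also normalized to a 0-10 scale across all tracked projects.

    import time

    def activity_score(commit_timestamps, half_life_days=30.0):
        """Toy recency-weighted activity score (illustrative assumption only).

        Each commit contributes a weight that halves every `half_life_days`,
        so recent commits count for more than older ones. Normalization to
        a 0-10 scale across projects is omitted here.
        """
        now = time.time()
        score = 0.0
        for ts in commit_timestamps:
            age_days = (now - ts) / 86400.0          # seconds -> days
            score += 0.5 ** (age_days / half_life_days)
        return score

    # Example: commits made 1, 10, and 400 days ago.
    day = 86400
    print(round(activity_score([time.time() - d * day for d in (1, 10, 400)]), 3))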

cleanlab

Posts with mentions or reviews of cleanlab. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-27.

token-label-error-benchmarks

Posts with mentions or reviews of token-label-error-benchmarks. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-12.
  • New paper on Automatically Detecting Label Errors in Entity Recognition Data
    1 project | /r/datasets | 27 Oct 2022
    Benchmarking code: https://github.com/cleanlab/token-label-error-benchmarks
    4 projects | /r/LanguageTechnology | 12 Oct 2022
    Blogpost: https://cleanlab.ai/blog/entity-recognition/
    Paper: https://arxiv.org/abs/2210.03920
    Tutorial: https://docs.cleanlab.ai/stable/tutorials/token_classification.html
    Benchmarking code: https://github.com/cleanlab/token-label-error-benchmarks
    Source code: https://github.com/cleanlab/cleanlab
    Example entity recognition model: https://github.com/cleanlab/examples/blob/master/entity_recognition/entity_recognition_training.ipynb
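
For context on what the linked tutorial covers, here is a minimal sketch of pointing cleanlab at token-level (entity recognition) labels. It assumes the cleanlab.token_classification interface described in the tutorial above (exact argument names may differ between versions), and the tiny labels/pred_probs arrays are made-up placeholder data just to show the expected input shapes; in practice pred_probs should be out-of-sample predicted probabilities from a trained NER model.

    import numpy as np
    from cleanlab.token_classification.filter import find_label_issues

    # Placeholder data: per-sentence token labels (class indices) and
    # per-token predicted class probabilities from some NER model.
    labels = [
        [0, 1, 0, 2, 0],   # sentence 1: five tokens
        [0, 2, 1, 0],      # sentence 2: four tokens
    ]
    pred_probs = [
        np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.15, 0.10, 0.75],   # labeled 0, but the model favors class 2
                  [0.10, 0.10, 0.80],
                  [0.85, 0.10, 0.05]]),
        np.array([[0.80, 0.10, 0.10],
                  [0.05, 0.10, 0.85],
                  [0.20, 0.70, 0.10],
                  [0.90, 0.05, 0.05]]),
    ]

    # Returns (sentence_index, token_index) pairs for tokens whose given
    # label is likely wrong, ranked by cleanlab's confidence in the issue.
    issues = find_label_issues(labels, pred_probs)
    print(issues)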

What are some alternatives?

When comparing cleanlab and token-label-error-benchmarks you can also consider the following projects:

alibi-detect - Algorithms for outlier, adversarial and drift detection

examples - Notebooks demonstrating example applications of the cleanlab library

label-studio - Label Studio is a multi-type data labeling and annotation tool with standardized output format

argilla - Argilla is a collaboration platform for AI engineers and domain experts who require high-quality outputs, full data ownership, and overall efficiency.

labelflow - The open platform for image labelling

karateclub - Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)

SSL4MIS - Semi Supervised Learning for Medical Image Segmentation, a collection of literature reviews and code implementations.

susi - SuSi: Python package for unsupervised, supervised and semi-supervised self-organizing maps (SOM)

pigeonXT - 🐦 Quickly annotate data from the comfort of your Jupyter notebook

refinery - The data scientist's open-source choice to scale, assess and maintain natural language data. Treat training data like a software artifact.

snorkel - A system for quickly generating training data with weak supervision

AFFiNE - There can be more than Notion and Miro. AFFiNE (pronounced [ə'fain]) is a next-gen knowledge base that brings planning, sorting and creating all together. Privacy first, open-source, customizable and ready to use.