cleverhans VS shapash

Compare cleverhans vs shapash and see what their differences are.

cleverhans

An adversarial example library for constructing attacks, building defenses, and benchmarking both (by cleverhans-lab)
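cleverhans ships attacks such as the Fast Gradient Sign Method (FGSM) for TensorFlow, PyTorch, and JAX models. As a minimal illustration of the technique itself (not the cleverhans API), here is a pure-Python sketch of FGSM against a simple logistic model; the weights and inputs are made-up toy values.

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def logistic_loss(w, x, y):
    # logistic loss for a linear model, label y in {-1, +1}
    return math.log1p(math.exp(-y * dot(w, x)))

def fgsm(w, x, y, eps):
    """One FGSM step: perturb x by eps in the sign of the loss gradient."""
    # gradient of the logistic loss w.r.t. the input x:
    # d/dx log(1 + exp(-y*w.x)) = -y * w * sigmoid(-y*w.x)
    s = 1.0 / (1.0 + math.exp(y * dot(w, x)))
    grad = [-y * wi * s for wi in w]
    sign = lambda g: 1 if g > 0 else -1 if g < 0 else 0
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, x, y = [1.0, 2.0], [0.5, -0.3], 1
x_adv = fgsm(w, x, y, eps=0.1)
# the adversarial input raises the model's loss on the true label
assert logistic_loss(w, x_adv, y) > logistic_loss(w, x, y)
```

The library's real attacks apply the same idea to deep networks via framework autodiff; this sketch only shows the core perturbation rule.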

shapash

🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models (by MAIF)
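Shapash builds user-facing reports on top of SHAP/LIME-style local feature contributions. To illustrate the underlying idea (not the Shapash API), here is a pure-Python sketch of additive contributions for a linear model, where each feature's contribution is its weight times its deviation from a background mean; all values are made-up toy numbers.

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def linear_contributions(w, b, x, x_mean):
    """Additive explanation of a linear prediction w.x + b.

    base is the expected prediction on the background mean; each
    contribution is w_i * (x_i - mean_i), so base + sum(contribs)
    reconstructs the prediction exactly.
    """
    base = b + dot(w, x_mean)
    contribs = [wi * (xi - mi) for wi, xi, mi in zip(w, x, x_mean)]
    return base, contribs

w, b = [2.0, -1.0], 0.5
x, x_mean = [1.0, 3.0], [0.0, 0.0]
base, contribs = linear_contributions(w, b, x, x_mean)
# the contributions sum back to the model's prediction
assert abs(base + sum(contribs) - (dot(w, x) + b)) < 1e-9
```

Shapash presents exactly this kind of per-feature breakdown (computed by SHAP or LIME for arbitrary models) through plots and dashboards.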
                 cleverhans           shapash
Mentions         3                    8
Stars            6,008                2,640
Stars growth     0.0%                 1.3%
Activity         0.0                  8.6
Latest commit    about 1 year ago     22 days ago
Language         Jupyter Notebook     Jupyter Notebook
License          MIT License          Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cleverhans

Posts with mentions or reviews of cleverhans. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-28.

shapash

Posts with mentions or reviews of shapash. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-28.

What are some alternatives?

When comparing cleverhans and shapash you can also consider the following projects:

deepchecks - Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.

shap - A game theoretic approach to explain the output of any machine learning model.

advertorch - A Toolbox for Adversarial Robustness Research

interpret - Fit interpretable models. Explain blackbox machine learning.

aws-security-workshops - A collection of the latest AWS Security workshops

LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)

AIX360 - Interpretability and explainability of data and machine learning models

trulens - Evaluation and Tracking for LLM Experiments

uncertainty-toolbox - Uncertainty Toolbox: a Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization

GlassCode - This plugin allows you to make JetBrains IDEs fully transparent while keeping the code sharp and bright.

delve - PyTorch model training and layer saturation monitor

CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms