adversarial-robustness-toolbox
cleverhans
| | adversarial-robustness-toolbox | cleverhans |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 4,460 | 6,079 |
| Growth | 2.9% | 1.2% |
| Activity | 9.7 | 0.0 |
| Latest commit | 5 days ago | 18 days ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
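The site does not publish its exact formula, but a recency-weighted score of this kind can be sketched in a few lines of Python; the exponential decay and the 30-day half-life below are illustrative assumptions, not the real metric:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: a commit's weight halves every
    `half_life_days`, so recent commits dominate the score.
    (Illustrative formula; not the site's actual one.)"""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

def percentile_rank(score, all_scores):
    """Percent of tracked projects with a strictly lower score."""
    return 100.0 * sum(s < score for s in all_scores) / len(all_scores)

# A project with recent commits outscores one with only old commits.
active = activity_score([1, 2, 3, 5, 8])      # commit ages in days
stale = activity_score([300, 320, 340, 360])  # commit ages in days
print(active > stale)                                      # True
print(percentile_rank(active, [stale, 0.5, active, 2.0]))  # 75.0
```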
adversarial-robustness-toolbox
- [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models?
- [D] ML Researchers/Engineers in Industry: Why don't companies use open source models more often?
- [D]: How safe is it to just use a stranger's model?
- [D] Does anyone care about adversarial attacks anymore?
  Check out this project https://github.com/Trusted-AI/adversarial-robustness-toolbox
- adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Library for Machine Learning Security Evasion, Poisoning, Extraction, Inference
- Introduction to Adversarial Machine Learning
  Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. (A minimal usage sketch follows this list.)
- [D] Testing a model's robustness to adversarial attacks
  Depending on what attacks you want, I've found both https://github.com/cleverhans-lab/cleverhans and https://github.com/Trusted-AI/adversarial-robustness-toolbox to be useful.
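To make the ART workflow concrete, here is a minimal evasion-attack sketch using `SklearnClassifier` and `FastGradientMethod`. The dataset, model, and `eps` value are illustrative choices, and exact defaults may differ between ART versions:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn model (illustrative data/model choice).
x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(x_train, y_train)

# Wrap it in an ART estimator so attacks can query it uniformly.
classifier = SklearnClassifier(model=model)

# Craft evasion examples with FGM; eps bounds the perturbation size.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_adv = attack.generate(x=x_test)

# Compare clean vs. adversarial accuracy.
print(f"clean: {model.score(x_test, y_test):.2f}, "
      f"adversarial: {model.score(x_adv, y_test):.2f}")
```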
cleverhans
- Clever Hans (Intelligence Misattribution)
  I only knew of this story from looking up the name of this library on adversarial DL: https://github.com/cleverhans-lab/cleverhans
- [D] DL Practitioners, Do You Use Layer Visualization Tools such as Grad-CAM in Your Process?
- [D] Does anyone care about adversarial attacks anymore?
  I feel as though this area has not received much attention over the last couple of years. The CleverHans project has gone stale and I haven't heard of many new results recently. Has the community lost interest in this area? Did we decide that adversarial attacks aren't such a problem in practical applications?
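Despite the staleness concerns in the thread above, cleverhans v4 still works as a library of framework-specific attack functions. A minimal sketch of its PyTorch `fast_gradient_method`, assuming an untrained stand-in network (a real evaluation would use your trained model and data):

```python
import numpy as np
import torch
import torch.nn as nn

from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

# Illustrative stand-in: any torch.nn.Module that returns logits works.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

x = torch.rand(16, 784)  # a batch of made-up inputs in [0, 1]

# Fast Gradient Method under an L-infinity perturbation budget of 0.1.
x_adv = fast_gradient_method(model, x, eps=0.1, norm=np.inf)

# Fraction of inputs where the perturbation flips the prediction.
clean_pred = model(x).argmax(dim=1)
adv_pred = model(x_adv).argmax(dim=1)
print((clean_pred != adv_pred).float().mean().item())
```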
What are some alternatives?
DeepRobust - A pytorch adversarial library for attack and defense methods on images and graphs
deepchecks - Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.
auto-attack - Code accompanying the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
advertorch - A Toolbox for Adversarial Robustness Research
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
AIX360 - Interpretability and explainability of data and machine learning models
alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)
aws-security-workshops - A collection of the latest AWS Security workshops
m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies
uncertainty-toolbox - Uncertainty Toolbox: a Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization
waf-bypass - Check your WAF before an attacker does
TorchDrift - Drift Detection for your PyTorch Models