| | auto-attack | adversarial-robustness-toolbox |
|---|---|---|
| Mentions | 3 | 8 |
| Stars | 608 | 4,483 |
| Growth | - | 1.2% |
| Activity | 0.0 | 9.7 |
| Latest commit | 4 months ago | 1 day ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
auto-attack
- DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses
I'm less familiar with poisoning, but at least for test-time robustness, the current benchmark for image classifiers is AutoAttack [0,1]. It's an ensemble of adaptive, parameter-free gradient-based and black-box attacks. Submitted academic work is typically considered incomplete without an evaluation on AA (and sometimes DeepFool [2]). It's good to see that both are included in ART.
[0] https://arxiv.org/abs/2003.01690
[1] https://github.com/fra31/auto-attack
[2] https://arxiv.org/abs/1511.04599
- [D] Testing a model's robustness to adversarial attacks
A better method is to use AutoAttack from Croce et al. (https://github.com/fra31/auto-attack), which is much less susceptible to gradient masking. It's actually a combination of four attacks (three white-box and one black-box) with good default hyperparameters. It's not perfect, but it gives a more accurate robustness estimate; a usage sketch follows below.
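Both comments above refer to the same library. Here is a minimal sketch of the standard AutoAttack evaluation on a PyTorch classifier; the model and data are dummy placeholders so the snippet runs standalone, and eps=8/255 is the conventional CIFAR-10 L-infinity budget rather than anything the quotes prescribe.

```python
import torch
import torch.nn as nn
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

# Placeholder classifier returning logits; substitute your trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

# Dummy CIFAR-10-shaped batch with pixel values in [0, 1]; substitute real test data.
x_test = torch.rand(16, 3, 32, 32)
y_test = torch.randint(0, 10, (16,))

# version='standard' runs APGD-CE, APGD-T, FAB-T, and Square with fixed hyperparameters.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')

# Returns the adversarial examples; robust accuracy is reported along the way.
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=16)
```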
adversarial-robustness-toolbox
- [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models?
- [D] ML Researchers/Engineers in Industry: Why don't companies use open source models more often?
- [D]: How safe is it to just use a stranger's model?
- [D] Does anyone care about adversarial attacks anymore?
Check out this project https://github.com/Trusted-AI/adversarial-robustness-toolbox
- adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Library for Machine Learning Security: Evasion, Poisoning, Extraction, Inference
- Introduction to Adversarial Machine Learning
Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
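ART's attacks follow a common pattern: wrap the model in an estimator, instantiate an attack against it, and call generate. A minimal evasion sketch along those lines, with a placeholder model and random data so it runs standalone; FGSM and eps=0.1 are illustrative choices, not library defaults.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder classifier; substitute your trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# The estimator wrapper gives ART a framework-agnostic handle on the model.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Dummy MNIST-shaped batch in [0, 1]; substitute real test inputs.
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)

# Fast Gradient Method evasion attack; eps is the perturbation budget.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
```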
- [D] Testing a model's robustness to adversarial attacks
Depending on what attacks you want, I've found both https://github.com/cleverhans-lab/cleverhans and https://github.com/Trusted-AI/adversarial-robustness-toolbox to be useful.
What are some alternatives?
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
DeepRobust - A PyTorch adversarial library for attack and defense methods on images and graphs
KitanaQA - KitanaQA: Adversarial training and data augmentation for neural question-answering models
alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)
alpha-beta-CROWN - alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)
m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies
waf-bypass - Check your WAF before an attacker does
Differential-Privacy-Guide - Differential Privacy Guide
gretel-synthetics - Synthetic data generators for structured and unstructured text, featuring differentially private learning.
unrpa - A program to extract files from the RPA archive format.