adversarial-robustness-toolbox VS auto-attack

Compare adversarial-robustness-toolbox vs auto-attack and see how they differ.

auto-attack

Code accompanying the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" (by fra31)
               adversarial-robustness-toolbox  auto-attack
Mentions       8                               3
Stars          4,447                           607
Growth         2.6%                            -
Activity       9.7                             0.0
Latest commit  6 days ago                      3 months ago
Language       Python                          Python
License        MIT License                     MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

adversarial-robustness-toolbox

Posts with mentions or reviews of adversarial-robustness-toolbox. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-22.

auto-attack

Posts with mentions or reviews of auto-attack. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-21.
  • DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses
    2 projects | news.ycombinator.com | 21 Dec 2021
    I'm less familiar with poisoning, but at least for test-time robustness, the current benchmark for image classifiers is AutoAttack [0,1]. It's an ensemble of adaptive & parameter-free gradient-based and black-box attacks. Submitted academic work is typically considered incomplete without an evaluation on AA (and sometimes DeepFool [2]). It is good to see that both are included in ART.

    [0] https://arxiv.org/abs/2003.01690

    [1] https://github.com/fra31/auto-attack

    [2] https://arxiv.org/abs/1511.04599

  • [D] Testing a model's robustness to adversarial attacks
    2 projects | /r/MachineLearning | 30 Jan 2021
    A better method is to use AutoAttack from Croce and Hein (https://github.com/fra31/auto-attack), which is much more resistant to gradient masking. It's actually a combination of four attacks (three white-box and one black-box) with good default hyper-parameters. It's not perfect, but it gives a more accurate estimate of robustness.
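
Both posts above point at the same recipe: hand a trained classifier to AutoAttack's standard ensemble and report the resulting robust accuracy. The sketch below follows the usage shown in the fra31/auto-attack README; the tiny CNN, the CIFAR-10-shaped dummy data, and the 8/255 budget are placeholder assumptions, and keyword arguments may vary across versions of the package.

    import torch
    import torch.nn as nn
    from autoattack import AutoAttack  # from https://github.com/fra31/auto-attack

    # Placeholder model: a tiny CNN standing in for a trained CIFAR-10
    # classifier. AutoAttack expects a module mapping [0, 1] images to logits.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    ).eval()

    # Dummy test batch; replace with real test data scaled to [0, 1].
    x_test = torch.rand(64, 3, 32, 32)
    y_test = torch.randint(0, 10, (64,))

    # version='standard' runs the parameter-free ensemble
    # (APGD-CE, APGD-T, FAB-T, Square); eps is the L-inf budget.
    adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=32)

Since the Hacker News comment notes that ART ships both AutoAttack and DeepFool, here is a rough equivalent through ART's wrappers. The class names come from ART's art.attacks.evasion module, but treat the exact constructor arguments as assumptions to check against your installed ART version.

    import numpy as np
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import AutoAttack as ARTAutoAttack, DeepFool

    # Wrap the placeholder model from the previous sketch for ART's
    # numpy-based API; clip_values declares the valid input range.
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(3, 32, 32),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    x_np = x_test.numpy()
    x_adv_aa = ARTAutoAttack(estimator=classifier, norm=np.inf, eps=8 / 255).generate(x=x_np)
    x_adv_df = DeepFool(classifier, max_iter=50).generate(x=x_np)

A practical note on the design: AutoAttack is deliberately parameter-free apart from the perturbation budget, which is why both posts recommend it over hand-tuned PGD for reporting robust accuracy.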

What are some alternatives?

When comparing adversarial-robustness-toolbox and auto-attack you can also consider the following projects:

DeepRobust - A pytorch adversarial library for attack and defense methods on images and graphs

TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/

alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)

KitanaQA - KitanaQA: Adversarial training and data augmentation for neural question-answering models

m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies

alpha-beta-CROWN - alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)

waf-bypass - Check your WAF before an attacker does

Differential-Privacy-Guide - Differential Privacy Guide

gretel-synthetics - Synthetic data generators for structured and unstructured text, featuring differentially private learning.

unrpa - A program to extract files from the RPA archive format.