auto-attack

Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" (by fra31)

Auto-attack Alternatives

Similar projects and alternatives to auto-attack based on common topics and language

  • adversarial-robustness-toolbox

    Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

  • TextAttack

    TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/

  • DeepRobust

    A pytorch adversarial library for attack and defense methods on images and graphs

  • KitanaQA

    KitanaQA: Adversarial training and data augmentation for neural question-answering models (by searchableai)

  • alpha-beta-CROWN

    alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)

NOTE: The number of mentions counts how often a project appears in the same posts as auto-attack, plus user-suggested alternatives. A higher count therefore suggests a closer or more popular auto-attack alternative.

auto-attack reviews and mentions

Posts with mentions or reviews of auto-attack. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-21.
  • DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses
    2 projects | news.ycombinator.com | 21 Dec 2021
    I'm less familiar with poisoning, but at least for test-time robustness, the current benchmark for image classifiers is AutoAttack [0,1]. It's an ensemble of adaptive & parameter-free gradient-based and black-box attacks. Submitted academic work is typically considered incomplete without an evaluation on AA (and sometimes deepfool [2]). It is good to see that both are included in ART.

    [0] https://arxiv.org/abs/2003.01690

    [1] https://github.com/fra31/auto-attack

    [2] https://arxiv.org/abs/1511.04599

  • [D] Testing a model's robustness to adversarial attacks
    2 projects | /r/MachineLearning | 30 Jan 2021
A better method is to use AutoAttack from Croce et al. (https://github.com/fra31/auto-attack), which is much more robust to gradient masking. It is a combination of three attacks (two white-box and one black-box) with good default hyper-parameters. It's not perfect, but it gives a more accurate robustness estimate.
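To make the "white-box gradient-based attack" idea above concrete, here is a minimal PGD-style L-infinity attack on a toy linear classifier. This is an illustrative sketch only, not the AutoAttack implementation: the function name `pgd_linf`, the toy model (`w`, `b`), and all hyper-parameter values are made up for this example. AutoAttack's actual attacks (APGD-CE, APGD-DLR, FAB, Square) are adaptive and parameter-free, which is precisely what this simple fixed-step loop is not.

```python
# Illustrative sketch: projected gradient ascent on the logistic loss of a
# binary linear classifier, constrained to an L-infinity ball of radius eps.
# This shows the basic white-box step that attacks like those in AutoAttack
# iterate and refine; it is NOT the AutoAttack code itself.
import numpy as np

def pgd_linf(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """Maximize the logistic loss of label y in {-1,+1} within ||x'-x||_inf <= eps."""
    x_adv = x.copy()
    for _ in range(steps):
        # loss = log(1 + exp(-y * (w.x + b)));  d loss / dx = -y * sigmoid(-y*z) * w
        z = x_adv @ w + b
        grad = -y * (1.0 / (1.0 + np.exp(y * z))) * w
        x_adv = x_adv + alpha * np.sign(grad)     # signed ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the L-inf ball
    return x_adv

# Tiny demo: a correctly classified point is pushed across the decision boundary.
w = np.array([1.0, -1.0]); b = 0.0
x = np.array([0.2, -0.1]); y = 1                  # w.x + b = 0.3 > 0, so correct
x_adv = pgd_linf(x, y, w, b)
print(np.sign(x_adv @ w + b))                     # -1.0: now misclassified
```

Gradient masking, which the quoted post mentions, is exactly the failure mode of relying on a single attack like this one: a defense can make the local gradient uninformative without being truly robust, which is why AutoAttack combines diverse white-box and black-box attacks.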

Stats

Basic auto-attack repo stats
  • Mentions: 3
  • Stars: 607
  • Activity: 0.0
  • Last commit: 3 months ago

fra31/auto-attack is an open source project licensed under the MIT License, an OSI-approved license.

The primary programming language of auto-attack is Python.

