TextAttack vs auto-attack

Compare TextAttack and auto-attack and see how they differ.

TextAttack

TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/ (by QData)
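
To make the comparison concrete, here is a minimal sketch of what an attack run with TextAttack looks like, following its documented quickstart; the checkpoint name, dataset, and example count are illustrative assumptions:

```python
# pip install textattack transformers
# Minimal sketch: wrap a HuggingFace classifier and run the TextFooler
# recipe on a few IMDB examples. The checkpoint and dataset names are
# illustrative; any sequence-classification model should work.
import transformers
import textattack
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

name = "textattack/bert-base-uncased-imdb"  # assumed checkpoint
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(wrapper)        # prebuilt attack recipe
dataset = HuggingFaceDataset("imdb", split="test")
args = textattack.AttackArgs(num_examples=10)    # keep the demo small
textattack.Attacker(attack, dataset, args).attack_dataset()
```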

auto-attack

Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" (by fra31)
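
And the counterpart for auto-attack, a minimal sketch of the standard evaluation the README describes; the toy model and random tensors below stand in for a real classifier and test set:

```python
# pip install git+https://github.com/fra31/auto-attack
import torch
import torch.nn as nn
from autoattack import AutoAttack

# Toy stand-ins so the sketch runs end to end; substitute a trained
# image classifier and a real test set in practice.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x_test = torch.rand(16, 3, 32, 32)    # images scaled to [0, 1]
y_test = torch.randint(0, 10, (16,))

# 'standard' runs the full ensemble (APGD-CE, APGD-T, FAB-T, Square)
# and reports robust accuracy under the given L-inf budget.
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=16)
```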
                   TextAttack           auto-attack
Mentions           3                    3
Stars              2,761                607
Stars growth       1.3%                 -
Activity           8.3                  0.0
Latest commit      about 1 month ago    3 months ago
Language           Python               Python
License            MIT License          MIT License
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

TextAttack

Posts with mentions or reviews of TextAttack. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-06.

auto-attack

Posts with mentions or reviews of auto-attack. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-21.
  • DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses
    2 projects | news.ycombinator.com | 21 Dec 2021
    I'm less familiar with poisoning, but at least for test-time robustness, the current benchmark for image classifiers is AutoAttack [0,1]. It's an ensemble of adaptive & parameter-free gradient-based and black-box attacks. Submitted academic work is typically considered incomplete without an evaluation on AA (and sometimes deepfool [2]). It is good to see that both are included in ART.

    [0] https://arxiv.org/abs/2003.01690

    [1] https://github.com/fra31/auto-attack

    [2] https://arxiv.org/abs/1511.04599

  • [D] Testing a model's robustness to adversarial attacks
    2 projects | /r/MachineLearning | 30 Jan 2021
    A better method is to use AutoAttack from Croce et al. https://github.com/fra31/auto-attack, which is much more robust to gradient masking. It's actually a combination of three attacks (two white-box and one black-box) with good default hyperparameters. It's not perfect, but it gives a more accurate robustness estimate.
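
As a sketch of the combination that comment describes, the repository's 'custom' version lets you run a subset of the ensemble; this reuses the toy model and tensors from the earlier sketch, and the attack names follow the README:

```python
from autoattack import AutoAttack

# Reuses `model`, `x_test`, `y_test` from the sketch above. Running one
# white-box and one black-box attack gives a cheaper first pass before
# committing to the full ensemble.
fast = AutoAttack(model, norm='Linf', eps=8/255, version='custom')
fast.attacks_to_run = ['apgd-ce', 'square']
x_adv = fast.run_standard_evaluation(x_test, y_test, bs=16)
```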

What are some alternatives?

When comparing TextAttack and auto-attack you can also consider the following projects:

TextFooler - A Model for Natural Language Attack on Text Classification and Inference

adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

DeepRobust - A pytorch adversarial library for attack and defense methods on images and graphs

pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]

KitanaQA - KitanaQA: Adversarial training and data augmentation for neural question-answering models

OpenAttack - An Open-Source Package for Textual Adversarial Attack.

alpha-beta-CROWN - alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)

spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python

advertorch - A Toolbox for Adversarial Robustness Research

AIJack - Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)