| | auto-attack | TextAttack |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 608 | 2,761 |
| Growth | - | 1.6% |
| Activity | 0.0 | 8.3 |
| Last commit | 4 months ago | about 1 month ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
auto-attack
-
DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses
I'm less familiar with poisoning, but at least for test-time robustness, the current benchmark for image classifiers is AutoAttack [0,1]. It's an ensemble of adaptive, parameter-free attacks, both gradient-based and black-box. Submitted academic work is typically considered incomplete without an evaluation on AA (and sometimes DeepFool [2]). It is good to see that both are included in ART.
[0] https://arxiv.org/abs/2003.01690
[1] https://github.com/fra31/auto-attack
[2] https://arxiv.org/abs/1511.04599
-
[D] Testing a model's robustness to adversarial attacks
A better method is to use AutoAttack from Croce et al. (https://github.com/fra31/auto-attack), which is much more robust to gradient masking. It's actually a combination of four attacks (three white-box and one black-box) with good default hyperparameters. It's not perfect, but it gives a more accurate robustness estimate.
TextAttack
-
Preprocessing methods besides stop words, regular expressions, lemmatization and stemming for an NLP classification problem
You could have a look at what's available in the augmenter here: https://github.com/QData/TextAttack. I'm not experienced with NLP, so I may be way off here.
-
TextAttack VS OpenAttack - a user suggested alternative
2 projects | 6 Jul 2022
-
[D] Advanced Takeaways from fast.ai book
Text data augmentation - negate words, replace words with synonyms, perturb word embeddings (nice GitHub repo for this)
What are some alternatives?
adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
TextFooler - A Model for Natural Language Attack on Text Classification and Inference
DeepRobust - A pytorch adversarial library for attack and defense methods on images and graphs
KitanaQA - KitanaQA: Adversarial training and data augmentation for neural question-answering models
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
alpha-beta-CROWN - alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)
OpenAttack - An Open-Source Package for Textual Adversarial Attack.
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
advertorch - A Toolbox for Adversarial Robustness Research
AIJack - Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)