| | TextAttack | advertorch |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 2,761 | 1,271 |
| Growth | 1.3% | 0.5% |
| Activity | 8.3 | 0.0 |
| Latest commit | about 1 month ago | 8 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TextAttack

- Preprocessing methods besides stop words, regular expressions, lemmatization and stemming for an NLP classification problem
  > You could have a look at what's available in the augmenter here: https://github.com/QData/TextAttack. I'm not experienced with NLP, so I may be way off here.
- TextAttack VS OpenAttack - a user-suggested alternative
  2 projects | 6 Jul 2022
- [D] Advanced Takeaways from fast.ai book
  > Text data augmentation - negate words, replace words with synonyms, perturb word embeddings (nice GitHub repo for this)
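The augmentation idea mentioned above (swapping words for near-synonyms) can be sketched without TextAttack itself. This is a minimal illustrative version with a hardcoded synonym table; TextAttack's real augmenters instead draw candidates from WordNet or word embeddings and apply additional constraints.

```python
import random

# Illustrative synonym table only -- a real augmenter would pull
# candidates from WordNet or a word-embedding nearest-neighbor search.
SYNONYMS = {
    "quick": ["fast", "speedy"],
    "happy": ["glad", "joyful"],
    "big": ["large", "huge"],
}

def augment(text, n=2, seed=0):
    """Return up to n variants of `text`, each with one word
    replaced by a synonym from the table above."""
    rng = random.Random(seed)
    words = text.split()
    # Positions whose word has at least one known synonym.
    swappable = [i for i, w in enumerate(words) if w.lower() in SYNONYMS]
    variants = set()
    while swappable and len(variants) < n:
        i = rng.choice(swappable)
        new_words = words.copy()
        new_words[i] = rng.choice(SYNONYMS[words[i].lower()])
        candidate = " ".join(new_words)
        if candidate != text:
            variants.add(candidate)
    return sorted(variants)

print(augment("the quick dog is happy"))
```

TextAttack wraps the same pattern behind classes such as its WordNet- and embedding-based augmenters, which also let you cap the fraction of words modified per example.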
advertorch
What are some alternatives?

- TextFooler - A Model for Natural Language Attack on Text Classification and Inference
- cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking both
- adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- nlpaug - Data augmentation for NLP
- pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
- mlattacks - Machine Learning Attack Series
- OpenAttack - An Open-Source Package for Textual Adversarial Attack
- AugMax - [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Animashree Anandkumar, and Zhangyang Wang
- auto-attack - Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
- timm-vis - Visualizer for PyTorch image models
- spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
- KitanaQA - Adversarial training and data augmentation for neural question-answering models