| | TextAttack | adversarial-robustness-toolbox |
|---|---|---|
| Mentions | 3 | 8 |
| Stars | 2,761 | 4,460 |
| Growth | 1.6% | 1.2% |
| Activity | 8.3 | 9.7 |
| Latest commit | about 1 month ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Recent mentions of TextAttack
- Preprocessing methods besides stop words, regular expressions, lemmatization and stemming for an NLP classification problem
  "You could have a look at what's available in the augmenter here: https://github.com/QData/TextAttack. I'm not experienced with NLP, so I may be way off here."
- TextAttack VS OpenAttack - a user-suggested alternative (2 projects | 6 Jul 2022)
- [D] Advanced Takeaways from fast.ai book
  "Text data augmentation - negate words, replace words with synonyms, perturb word embeddings (nice GitHub repo for this)." A short augmentation sketch follows this list.
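To make the augmenter mentions above concrete, here is a minimal sketch using TextAttack's built-in augmentation recipes. It assumes `textattack` is installed; the example sentence and parameter values are arbitrary illustrations, not taken from the original posts.

```python
# Minimal sketch of TextAttack's augmentation recipes (pip install textattack).
from textattack.augmentation import EmbeddingAugmenter, WordNetAugmenter

text = "The quick brown fox jumps over the lazy dog."

# Synonym replacement via WordNet -- the classic "replace words with synonyms".
wordnet_aug = WordNetAugmenter(pct_words_to_swap=0.2, transformations_per_example=2)
print(wordnet_aug.augment(text))  # returns a list of augmented strings

# Replace words with nearest neighbors in a counter-fitted embedding space --
# the "perturb word embeddings" idea from the fast.ai thread above.
embedding_aug = EmbeddingAugmenter(pct_words_to_swap=0.2, transformations_per_example=2)
print(embedding_aug.augment(text))
```

Both augmenters return lists of perturbed sentences, which can be appended to a training set for an NLP classification problem.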
Recent mentions of adversarial-robustness-toolbox
- [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models?
- [D] ML Researchers/Engineers in Industry: Why don't companies use open source models more often?
- [D] How safe is it to just use a stranger's model?
- [D] Does anyone care about adversarial attacks anymore?
  "Check out this project: https://github.com/Trusted-AI/adversarial-robustness-toolbox"
- adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference
- Introduction to Adversarial Machine Learning
  "Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference." A minimal attack sketch follows this list.
- [D] Testing a model's robustness to adversarial attacks
  "Depending on what attacks you want, I've found both https://github.com/cleverhans-lab/cleverhans and https://github.com/Trusted-AI/adversarial-robustness-toolbox to be useful."
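ART's core workflow is: wrap a trained model in an ART estimator, instantiate an attack, and generate adversarial examples to measure the accuracy drop. Below is a minimal sketch of that evaluation loop; the scikit-learn model and iris data are placeholder choices for brevity, not from the original posts.

```python
# Minimal sketch of an evasion attack with ART
# (pip install adversarial-robustness-toolbox scikit-learn).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap the fitted model in an ART estimator, then craft FGSM perturbations.
classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_adv = attack.generate(x=x)

# Robustness is gauged by the gap between clean and adversarial accuracy.
print("clean accuracy:      ", model.score(x, y))
print("adversarial accuracy:", model.score(x_adv, y))
```

The same `generate` pattern applies to deep models via wrappers such as `PyTorchClassifier` or `TensorFlowV2Classifier`.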
What are some alternatives?
TextFooler - A Model for Natural Language Attack on Text Classification and Inference
DeepRobust - A PyTorch adversarial library for attack and defense methods on images and graphs
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
auto-attack - Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
OpenAttack - An Open-Source Package for Textual Adversarial Attack.
alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)
m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
waf-bypass - Check your WAF before an attacker does
KitanaQA - KitanaQA: Adversarial training and data augmentation for neural question-answering models
Differential-Privacy-Guide - Differential Privacy Guide