Adversarial-robustness-toolbox Alternatives
Similar projects and alternatives to adversarial-robustness-toolbox
- auto-attack: Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" (a usage sketch follows this list).
- DeepRobust: A PyTorch adversarial library for attack and defense methods on images and graphs.
- m2cgen: Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies.
- alpha-zero-boosted: A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM).
- unrpa: A program to extract files from the RPA archive format.
- TextAttack 🐙: A Python framework for adversarial attacks, data augmentation, and model training in NLP. https://textattack.readthedocs.io/en/master/
- counterfit: A CLI that provides a generic automation layer for assessing the security of ML models.
- mortar: An evasion technique to defeat and divert detection and prevention by security products (AV/EDR/XDR) (by 0xsp-SRD).
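For the auto-attack entry above, here is a minimal usage sketch. It assumes the pip-installable autoattack package; the stand-in model, data shapes, and eps budget are illustrative placeholders, not values taken from the project page.

```python
import torch
import torch.nn as nn
from autoattack import AutoAttack

# Illustrative stand-in classifier: any nn.Module returning logits works.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

# Illustrative data: a small batch of CIFAR-10-sized images in [0, 1].
x_test = torch.rand(64, 3, 32, 32)
y_test = torch.randint(0, 10, (64,))

# version='standard' runs the full parameter-free ensemble
# (APGD-CE, APGD-T, FAB-T, Square) at the given L-inf budget.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255,
                       version='standard', device='cpu')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)
```

The returned x_adv holds the strongest adversarial example found per input, and the run logs robust accuracy after each attack in the ensemble.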
adversarial-robustness-toolbox reviews and mentions
- adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Library for Machine Learning Security: Evasion, Poisoning, Extraction, Inference
- Introduction to Adversarial Machine Learning: Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
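To make the "evaluate" half of that description concrete, here is a minimal sketch using ART's scikit-learn wrapper and its Fast Gradient Method evasion attack; the dataset, kernel, and eps value are arbitrary choices for illustration, not part of the original mention.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn model.
X, y = load_iris(return_X_y=True)
model = SVC(kernel="linear").fit(X, y)

# Wrap it in an ART estimator so attacks can query it uniformly.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Craft evasion examples; eps is the perturbation budget
# (0.2 is an arbitrary value for this sketch).
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

print("clean accuracy:", np.mean(model.predict(X) == y))
print("adversarial accuracy:", np.mean(model.predict(X_adv) == y))
```

The drop from clean to adversarial accuracy is the basic robustness signal the library is built to measure.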
- [D] Testing a model's robustness to adversarial attacks: "Depending on what attacks you want, I've found both https://github.com/cleverhans-lab/cleverhans and https://github.com/Trusted-AI/adversarial-robustness-toolbox to be useful."
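For comparison, a minimal sketch of the cleverhans route mentioned in that quote, assuming the cleverhans 4.x PyTorch attack functions; the model and batch here are placeholders.

```python
import torch
import torch.nn as nn
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

# Placeholder model and batch; any callable returning logits works.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
x = torch.rand(32, 1, 28, 28)

# One-step L-inf FGM with an illustrative eps of 0.1.
x_adv = fast_gradient_method(model, x, eps=0.1, norm=float("inf"))

# Robustness check: compare predictions on clean vs. adversarial inputs.
clean_pred = model(x).argmax(dim=1)
adv_pred = model(x_adv).argmax(dim=1)
print("flipped predictions:", (clean_pred != adv_pred).float().mean().item())
```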
Stats
Trusted-AI/adversarial-robustness-toolbox is an open source project licensed under the MIT License, which is an OSI-approved license.