privacy
Library for training machine learning models with privacy for training data (by tensorflow)
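As a rough illustration of what the library offers, here is a minimal DP-SGD training sketch in the style of the TensorFlow Privacy tutorials; the model architecture, hyperparameter values, and data are illustrative assumptions, not something taken from this comparison.

```python
# A minimal sketch of differentially private training with TensorFlow Privacy.
# Model, hyperparameters, and data shapes are illustrative assumptions.
import tensorflow as tf
import tensorflow_privacy

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# DP-SGD: clip each per-example gradient, then add Gaussian noise.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,      # max L2 norm of each per-example gradient
    noise_multiplier=1.1,  # noise scale relative to the clip norm
    num_microbatches=32,   # must evenly divide the batch size
    learning_rate=0.15,
)

# The loss must stay per-example (no reduction) so gradients can be
# clipped individually before averaging.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.losses.Reduction.NONE
)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, epochs=3)
```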
adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams (by Trusted-AI)
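To give a concrete feel for ART's API before the head-to-head numbers, here is a minimal evasion sketch using the Fast Gradient Method; the Keras model, input shape, placeholder data, and epsilon are assumptions for illustration only.

```python
# A sketch of an ART evasion attack (FGSM) against a simple Keras model.
# The model and the random placeholder data are illustrative assumptions.
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import FastGradientMethod

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Wrap the model so ART's attacks can query gradients and predictions.
classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=10,
    input_shape=(28, 28, 1),
    loss_object=loss_object,
    clip_values=(0.0, 1.0),  # valid pixel range for perturbed inputs
)

attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test = np.random.rand(16, 28, 28, 1).astype(np.float32)  # placeholder data
x_adv = attack.generate(x=x_test)  # adversarial versions of x_test
```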
| | privacy | adversarial-robustness-toolbox |
|---|---|---|
| Mentions | 2 | 8 |
| Stars | 1,935 | 4,839 |
| Growth | 0.6% | 1.2% |
| Activity | 7.7 | 9.5 |
| Latest commit | 11 days ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
privacy
Posts with mentions or reviews of privacy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-03-17.
adversarial-robustness-toolbox
Posts with mentions or reviews of adversarial-robustness-toolbox. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-22.
- [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models?
- [D] ML Researchers/Engineers in Industry: Why don't companies use open source models more often?
- [D]: How safe is it to just use a strangers Model?
- [D] Does anyone care about adversarial attacks anymore?
  Check out this project: https://github.com/Trusted-AI/adversarial-robustness-toolbox
- adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Library for Machine Learning Security: Evasion, Poisoning, Extraction, Inference
- Introduction to Adversarial Machine Learning
  Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
- [D] Testing a model's robustness to adversarial attacks
  Depending on what attacks you want, I've found both https://github.com/cleverhans-lab/cleverhans and https://github.com/Trusted-AI/adversarial-robustness-toolbox to be useful.
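Following that suggestion, below is a hedged sketch of one way to measure robustness with ART's Projected Gradient Descent attack; it reuses the `classifier` from the FGSM sketch above, and the placeholder data and attack budget are illustrative assumptions.

```python
# A sketch of robustness evaluation: accuracy on PGD adversarial examples.
# `classifier` is the ART TensorFlowV2Classifier from the FGSM sketch above;
# the data and the attack budget (eps) are placeholder assumptions.
import numpy as np
from art.attacks.evasion import ProjectedGradientDescent

x_test = np.random.rand(16, 28, 28, 1).astype(np.float32)  # placeholder inputs
y_test = np.random.randint(0, 10, size=16)                 # placeholder labels

attack = ProjectedGradientDescent(
    estimator=classifier,
    eps=0.3,        # total L-infinity perturbation budget
    eps_step=0.01,  # step size per iteration
    max_iter=40,
)

x_adv = attack.generate(x=x_test)
preds = np.argmax(classifier.predict(x_adv), axis=1)
print(f"Accuracy under PGD: {np.mean(preds == y_test):.3f}")
```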
What are some alternatives?
When comparing privacy and adversarial-robustness-toolbox, you can also consider the following projects:
tf-encrypted - A Framework for Encrypted Machine Learning in TensorFlow
DeepRobust - A PyTorch adversarial library for attack and defense methods on images and graphs