AIJack vs awesome-machine-unlearning

| | AIJack | awesome-machine-unlearning |
|---|---|---|
| Mentions | 11 | 5 |
| Stars | 325 | 602 |
| Growth | - | - |
| Activity | 7.3 | 7.9 |
| Last commit | 14 days ago | 9 days ago |
| Language | C++ | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AIJack
-
Protect your AI with AIJack - an easy-to-use open-source simulation tool for testing the security of your AI system against hijackers
AIJack is easy to use and can help you secure your AI system quickly. Check our documentation for more information. Don't wait for a hijacker to compromise your AI; take action today and protect your system with AIJack.
-
How should I manage and develop my open-source project?
I have developed one OSS tool (AIJack), and I would like to ask how I manage it and where I should focus.
-
AIJack: I built an OSS framework for attacks and defenses against machine learning
I want to share my project, AIJack, a security and privacy risk simulator for machine learning. Many papers show that machine learning is vulnerable to cyber-attacks and privacy violations. For example, hackers can reconstruct private training data from the trained model. To simulate such risks, AIJack allows you to experiment with various combinations of more than 30 attack and defense mechanisms, such as Model Inversion, Poisoning Attack, Evasion Attack, Federated Learning, Split Learning, Differential Privacy, and Homomorphic Encryption.
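Among the defenses listed, differential privacy is the easiest to illustrate in isolation. Below is a minimal, generic sketch of the Laplace mechanism in plain Python. It does not use AIJack's API; the function names (`laplace_noise`, `private_count`) are illustrative only.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Adding or removing one record changes a count by at most 1 (the sensitivity),
    so noise with scale sensitivity/epsilon masks any individual's contribution.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(42)
noisy = private_count(1000, epsilon=0.5)  # true count perturbed by Laplace(0, 2) noise
```

Smaller `epsilon` means a larger noise scale and therefore stronger privacy at the cost of accuracy; this trade-off is the knob that tools like AIJack let you experiment with against attacks such as model inversion.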
-
Privacy-Preserving Machine Learning with AIJack - 1: Federated Learning on PyTorch
Next, we will implement FedAVG, one of the most representative methods of Federated Learning. We use AIJack, an open-source tool, to simulate the security and privacy risks of machine learning algorithms. AIJack supports both single-process and MPI backends.
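As a rough illustration of what FedAVG computes (independent of AIJack's actual API, which readers should take from the documentation), the server step is a data-size-weighted average of the clients' parameter vectors:

```python
from typing import List

def fedavg_aggregate(client_weights: List[List[float]],
                     client_sizes: List[int]) -> List[float]:
    """FedAVG server step: average client parameter vectors,
    weighting each client by the number of samples it trained on."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += (size / total) * w
    return global_weights

# Two clients with unequal data: the larger client pulls the average toward it.
print(fedavg_aggregate([[1.0, 2.0], [3.0, 4.0]], [1, 3]))  # → [2.5, 3.5]
```

In a real federated round, each client first runs a few local SGD epochs on its private data, sends updated weights to the server, and the server broadcasts the aggregate back; only this aggregation step is shown here.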
-
[P] Let's Hijack AI! Security and Privacy Risk Simulator for Machine Learning
I have released v0.0.1-alpha of AIJack, an OSS framework to simulate various attacks and defenses against machine learning models. I have implemented more than 30 algorithms, such as Model Inversion, Poisoning Attack, Evasion Attack, Federated Learning, Split Learning, Differential Privacy, and Homomorphic Encryption. You can easily experiment with various combinations of attack and defense techniques. It supports not only the standard single-process backend but also MPI.
I have developed a framework named AIJack to simulate various attacks against machine learning models, mainly based on PyTorch and sklearn. Currently, I have implemented more than 20 algorithms, including Federated Learning, Split Learning, Differential Privacy, Homomorphic Encryption, and other heuristic approaches. I am looking forward to your feedback!
- AIJack - Security and Privacy Risk Simulator for Machine Learning
- AIJack: Security and Privacy Risk Simulator for Machine Learning
-
Let's hijack AI! Security and Privacy Risk Simulator for Machine Learning
I have developed AIJack, which allows you to assess the privacy and security risks of machine learning algorithms such as Model Inversion, Poisoning Attack and Evasion Attack. AIJack also provides various defense techniques like Federated Learning, Split Learning, Differential Privacy, Homomorphic Encryption, and other heuristic approaches. You can easily experiment with various combinations of attacks and defenses.
-
Let's Hijack AI! Security and Privacy Risk Simulator for Machine Learning
I have developed a framework named AIJack to simulate various attacks against machine learning models, mainly based on PyTorch and sklearn. Currently, I have implemented more than 20 algorithms! I am looking forward to your feedback!
code: https://github.com/Koukyosyumei/AIJack
documentation: https://koukyosyumei.github.io/AIJack/intro.html
awesome-machine-unlearning
-
[P] [R] Machine Unlearning Summary
Github Repo: https://github.com/tamlhp/awesome-machine-unlearning 📚 Notebook: https://www.kaggle.com/code/tamlhp/machine-unlearning-the-right-to-be-forgotten/
-
[R] A Survey of Machine Unlearning
Today, computer systems hold large amounts of personal data. Yet while such an abundance of data allows breakthroughs in artificial intelligence, and especially machine learning (ML), its existence can be a threat to user privacy, and it can weaken the bonds of trust between humans and AI. Recent regulations now require that, on request, private information about a user must be removed from both computer systems and from ML models, i.e., "the right to be forgotten". While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, as ML models often "remember" the old data. Contemporary adversarial attacks on trained models have proven that we can learn whether an instance or an attribute belonged to the training data. This phenomenon calls for a new paradigm, namely machine unlearning, to make ML models forget about particular data. It turns out that recent works on machine unlearning have not been able to completely solve the problem due to the lack of common frameworks and resources. Therefore, this paper aspires to present a comprehensive examination of machine unlearning's concepts, scenarios, methods, and applications. Specifically, as a category collection of cutting-edge studies, the intention behind this article is to serve as a comprehensive resource for researchers and practitioners seeking an introduction to machine unlearning and its formulations, design criteria, removal requests, algorithms, and applications. In addition, we aim to highlight the key findings, current trends, and new research areas that have not yet featured the use of machine unlearning but could benefit greatly from it. We hope this survey serves as a valuable resource for ML researchers and those seeking to innovate privacy technologies. Our resources are publicly available at this https URL.
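One family of approaches such surveys cover is exact unlearning by partitioning (SISA-style): train independent sub-models on disjoint data shards, so a deletion request only forces retraining the one shard that held the record instead of the whole model. The toy sketch below uses a trivial per-shard "model" (the shard mean) purely to show the bookkeeping; the class and method names are illustrative, not from any library.

```python
from typing import List

class ShardedMeanModel:
    """Toy SISA-style ensemble: each shard's 'model' is just the mean of its data.
    Deleting a record retrains only the affected shard, not the whole ensemble."""

    def __init__(self, data: List[float], num_shards: int):
        # Round-robin split of the training data into disjoint shards.
        self.shards: List[List[float]] = [data[i::num_shards] for i in range(num_shards)]
        self.shard_models: List[float] = [self._train(s) for s in self.shards]

    @staticmethod
    def _train(shard: List[float]) -> float:
        return sum(shard) / len(shard) if shard else 0.0

    def predict(self) -> float:
        # Ensemble output: average of the per-shard models.
        return sum(self.shard_models) / len(self.shard_models)

    def unlearn(self, value: float) -> None:
        # Find the shard containing the record, remove it, retrain that shard only.
        for i, shard in enumerate(self.shards):
            if value in shard:
                shard.remove(value)
                self.shard_models[i] = self._train(shard)
                return
        raise KeyError(f"{value} not in training data")

model = ShardedMeanModel([1.0, 2.0, 3.0, 4.0], num_shards=2)
model.unlearn(4.0)  # retrains only the shard that held 4.0
```

The point of the design is the cost asymmetry: full retraining touches all data, while a shard-local retrain touches only `len(data) / num_shards` records, at the price of an ensemble rather than a single model.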
-
Welcome!
Welcome to Machine Unlearning! You can post all kinds of stuff about machine unlearning here. Here is a great resource to get you started: https://github.com/tamlhp/awesome-machine-unlearning
-
[P] [R] [D] Can Machine Actually Forget Your Data?
We also have a GitHub repo for this topic; please consider starring it if this topic piques your curiosity.
- [P] Awesome Machine Unlearning
What are some alternatives?
MetisFL - The first open Federated Learning framework implemented in C++ and Python.
differential-privacy-library - Diffprivlib: The IBM Differential Privacy Library
concrete - Concrete: TFHE Compiler that converts python programs into FHE equivalent
fides - The Privacy Engineering & Compliance Framework
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
continual-pretraining-nlp-vision - Code to reproduce experiments from the paper "Continual Pre-Training Mitigates Forgetting in Language and Vision" https://arxiv.org/abs/2205.09357
mlattacks - Machine Learning Attack Series
course-content-dl - NMA deep learning course
adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
PyRedactKit - Python CLI tool to redact and un-redact sensitive data from text files. 🔐📝
federated-xgboost - Federated gradient boosted decision tree learning