differential-privacy-library vs awesome-machine-unlearning
| | differential-privacy-library | awesome-machine-unlearning |
|---|---|---|
| Mentions | 2 | 5 |
| Stars | 834 | 744 |
| Growth | 1.4% | - |
| Activity | 5.2 | 8.5 |
| Latest commit | about 2 months ago | 3 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
differential-privacy-library
-
Well, crackers.
Differential privacy. Basically, I wanted to create a randomly generated database file, akin to medical records, build a Private Aggregation of Teacher Ensembles (PATE) model from 20-60% of its content, and then use that teacher model on the other 80-40% of the database, which was just plaintext, not that that matters. The problem is, I barely understand how it all works, and the one example I found used CryptoNumerics' library, cn.protect, and that went as I've already described. I've fallen back on the practical part of the paper and found another way to get the practical usage the assignment requires: I'm now trying to use https://github.com/IBM/differential-privacy-library and the example from its 30-second guide to instead make the practical part about choosing epsilon (a measure of how much information a single query on the database may leak to a malicious third party) by tracking the accuracy of the private results against the original data. I hope I'll manage to edit the code to accept my text file: parse it from txt into an ndarray, separate the last column to use as the target, and go from there.
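A minimal sketch of that plan, adapted from diffprivlib's 30-second guide: load the text file into an ndarray, split off the last column as the target, then sweep epsilon and compare each private model's accuracy against a non-private baseline. The filename records.txt and its comma-delimited, numeric, label-in-last-column layout are assumptions; adjust to your file.

```python
# Sketch of the epsilon-vs-accuracy experiment described above, assuming a
# numeric, comma-delimited text file whose last column is the class label.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB as SkGaussianNB
from diffprivlib.models import GaussianNB

data = np.loadtxt("records.txt", delimiter=",")  # parse the txt into an ndarray
X, y = data[:, :-1], data[:, -1]                 # separate the last column as the target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Non-private baseline to compare each private model against.
baseline = SkGaussianNB().fit(X_train, y_train).score(X_test, y_test)

# Feature bounds should come from public knowledge; deriving them from the
# data itself (done here for brevity) technically leaks extra information.
bounds = (X_train.min(axis=0), X_train.max(axis=0))

for epsilon in np.logspace(-2, 2, 20):
    clf = GaussianNB(epsilon=epsilon, bounds=bounds).fit(X_train, y_train)
    print(f"epsilon={epsilon:7.3f}  accuracy={clf.score(X_test, y_test):.3f}  "
          f"(non-private baseline: {baseline:.3f})")
```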
-
Differential Privacy project in Python
IBM's Diffprivlib is a well-documented implementation of differential privacy in Python. Source code and getting-started documentation are available in the IBM differential-privacy-library GitHub repository.
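For a first taste of the API, a minimal sketch using diffprivlib's low-level Laplace mechanism to noise a single numeric query result; the query value 42 is just an illustration.

```python
# Minimal sketch: add Laplace noise, calibrated to epsilon and sensitivity,
# to one numeric query result. The value 42 stands in for a count query.
from diffprivlib.mechanisms import Laplace

# Sensitivity 1: adding or removing one record changes a count by at most 1.
mech = Laplace(epsilon=0.5, sensitivity=1)
print(mech.randomise(42))  # noisy answer; repeated calls give different draws
```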
awesome-machine-unlearning
-
[P] [R] Machine Unlearning Summary
GitHub repo: https://github.com/tamlhp/awesome-machine-unlearning 📚 Notebook: https://www.kaggle.com/code/tamlhp/machine-unlearning-the-right-to-be-forgotten/
-
[R] A Survey of Machine Unlearning
Today, computer systems hold large amounts of personal data. Yet while such an abundance of data allows breakthroughs in artificial intelligence, and especially machine learning (ML), its existence can be a threat to user privacy, and it can weaken the bonds of trust between humans and AI. Recent regulations now require that, on request, private information about a user must be removed from both computer systems and from ML models (i.e., "the right to be forgotten"). While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, as ML models often "remember" the old data. Contemporary adversarial attacks on trained models have proven that we can learn whether an instance or an attribute belonged to the training data. This phenomenon calls for a new paradigm, namely machine unlearning, to make ML models forget about particular data. It turns out that recent works on machine unlearning have not been able to completely solve the problem due to the lack of common frameworks and resources. Therefore, this paper aspires to present a comprehensive examination of machine unlearning's concepts, scenarios, methods, and applications. Specifically, as a category collection of cutting-edge studies, the intention behind this article is to serve as a comprehensive resource for researchers and practitioners seeking an introduction to machine unlearning and its formulations, design criteria, removal requests, algorithms, and applications. In addition, we aim to highlight the key findings, current trends, and new research areas that have not yet featured the use of machine unlearning but could benefit greatly from it. We hope this survey serves as a valuable resource for ML researchers and those seeking to innovate privacy technologies. Our resources are publicly available at https://github.com/tamlhp/awesome-machine-unlearning.
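To make the "forgetting" requirement concrete, here is a minimal, illustrative sketch (not from the survey) of the exact-unlearning baseline that approximate methods are compared against: on a deletion request, retrain from scratch on the remaining data. All names and the toy data are hypothetical.

```python
# Illustrative sketch of exact unlearning by full retraining, the costly
# baseline that approximate unlearning methods try to match.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def unlearn_exact(X, y, forget_idx):
    """Honour a right-to-be-forgotten request by retraining without the
    requested rows. Exact but expensive; schemes like SISA shard the
    training data so only the affected shards need retraining."""
    keep = np.setdiff1d(np.arange(len(y)), forget_idx)
    return train(X[keep], y[keep])

# Example: forget records 3 and 17 of a toy dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
model = unlearn_exact(X, y, forget_idx=[3, 17])
```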
-
Welcome!
Welcome to Machine Unlearning! You can post all kinds of stuff about machine unlearning here. Here is a great resource to get you started: https://github.com/tamlhp/awesome-machine-unlearning
-
[P] [R] [D] Can Machines Actually Forget Your Data?
We also have a GitHub repo for this topic; please consider starring it if this topic piques your curiosity.
-
[P] Awesome Machine Unlearning
What are some alternatives?
PyDP - The Python Differential Privacy Library. Built on top of: https://github.com/google/differential-privacy
AIJack - Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.
course-content-dl - NMA (Neuromatch Academy) deep learning course
fides - The Privacy Engineering & Compliance Framework
continual-pretraining-nlp-vision - Code to reproduce experiments from the paper "Continual Pre-Training Mitigates Forgetting in Language and Vision" https://arxiv.org/abs/2205.09357
PrivacyEngCollabSpace - Privacy Engineering Collaboration Space
PyRedactKit - Python CLI tool to redact and un-redact sensitive data from text files. 🔐📝
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Keras - Deep Learning for humans