differential-privacy-library VS awesome-machine-unlearning

Compare differential-privacy-library vs awesome-machine-unlearning and see how they differ.

                differential-privacy-library    awesome-machine-unlearning
Mentions        2                               5
Stars           779                             609
Growth          1.3%                            -
Activity        4.8                             7.9
Last commit     10 days ago                     24 days ago
Language        Python                          Jupyter Notebook
License         MIT License                     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

differential-privacy-library

Posts with mentions or reviews of differential-privacy-library. We have used some of these posts to build our list of alternatives and similar projects.
  • Well, crackers.
    1 project | /r/ProgrammerHumor | 22 Nov 2021
    Differential privacy. Basically I wanted to create a randomly generated database file, akin to medical records, build a Private Aggregation of Teacher Ensembles (PATE) algorithm based on 20-60% of its content, and then use this teacher model on the other 80-40% of the database, which was just plaintext, not that that matters. The problem is, I've barely got any ideas on how it all works, and the one example I found used CryptoNumerics' library called cn.protect. And that went like I've already described. I've fallen back on the practical part of the paper, found another way of getting the practical usage the assignment requires, and am now trying to use https://github.com/IBM/differential-privacy-library and the example from its 30-second guide to instead make the practical part about choosing epsilon (a measure of how much information one query on the database can give away to a malicious third party) by tracking the associated accuracy of the result dataset compared to the original. I hope I'll manage to edit the code to accept my text file after parsing it into an ndarray, separating the last column to use as a target, and going from there. (See the diffprivlib sketch after this list.)
  • Differential Privacy project on Python
    1 project | /r/differentialprivacy | 22 Dec 2020
    IBM's Diffprivlib is a well-documented implementation of differential privacy in Python. Source code and getting-started documentation are available in the IBM differential-privacy-library GitHub repository.
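
The workflow described in the first post above maps fairly directly onto diffprivlib's API. Below is a minimal sketch, not the poster's actual code: it assumes a whitespace-delimited file named records.txt (a placeholder name) whose last column is the class label, loads it into an ndarray, and sweeps epsilon while tracking the accuracy of diffprivlib's differentially private naive Bayes classifier on a held-out split.

```python
# Minimal sketch of the epsilon-vs-accuracy experiment described in the post above.
# "records.txt" is a placeholder for the poster's generated database file.
import numpy as np
from sklearn.model_selection import train_test_split
from diffprivlib.models import GaussianNB  # IBM differential-privacy-library

data = np.loadtxt("records.txt")          # parse the text file into an ndarray
X, y = data[:, :-1], data[:, -1]          # last column is the target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature bounds should be given explicitly; otherwise diffprivlib derives them
# from the data and warns, since that derivation itself leaks information.
bounds = (X.min(axis=0), X.max(axis=0))

# Smaller epsilon means a stronger privacy guarantee and, typically, lower accuracy.
for epsilon in (0.01, 0.1, 1.0, 10.0):
    clf = GaussianNB(epsilon=epsilon, bounds=bounds)
    clf.fit(X_train, y_train)
    print(f"epsilon={epsilon:>5}: accuracy={clf.score(X_test, y_test):.3f}")
```

Plotting accuracy against epsilon from a loop like this is one way to show how much utility a chosen privacy budget costs, which is the experiment the poster set out to run.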

awesome-machine-unlearning

Posts with mentions or reviews of awesome-machine-unlearning. We have used some of these posts to build our list of alternatives and similar projects.
  • [P] [R] Machine Unlearning Summary
    1 project | /r/MachineLearning | 11 Jul 2023
    Github Repo: https://github.com/tamlhp/awesome-machine-unlearning 📚 Notebook: https://www.kaggle.com/code/tamlhp/machine-unlearning-the-right-to-be-forgotten/
  • [R] A Survey of Machine Unlearning
    1 project | /r/MachineLearning | 10 Jul 2023
    Today, computer systems hold large amounts of personal data. Yet while such an abundance of data allows breakthroughs in artificial intelligence, and especially machine learning (ML), its existence can be a threat to user privacy, and it can weaken the bonds of trust between humans and AI. Recent regulations now require that, on request, private information about a user must be removed from both computer systems and from ML models, i.e., "the right to be forgotten". While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, as ML models often "remember" the old data. Contemporary adversarial attacks on trained models have proven that we can learn whether an instance or an attribute belonged to the training data. This phenomenon calls for a new paradigm, namely machine unlearning, to make ML models forget about particular data. It turns out that recent works on machine unlearning have not been able to completely solve the problem due to the lack of common frameworks and resources. Therefore, this paper aspires to present a comprehensive examination of machine unlearning's concepts, scenarios, methods, and applications. Specifically, as a category collection of cutting-edge studies, the intention behind this article is to serve as a comprehensive resource for researchers and practitioners seeking an introduction to machine unlearning and its formulations, design criteria, removal requests, algorithms, and applications. In addition, we aim to highlight the key findings, current trends, and new research areas that have not yet featured the use of machine unlearning but could benefit greatly from it. We hope this survey serves as a valuable resource for ML researchers and those seeking to innovate privacy technologies. Our resources are publicly available at this https URL. (A toy unlearning-by-retraining sketch follows this list.)
  • Welcome!
    1 project | /r/Machine_Unlearning | 25 Nov 2022
    Welcome to Machine Unlearning! You can post all kinds of stuff about machine unlearning here. Here is a great resource to get you started: https://github.com/tamlhp/awesome-machine-unlearning
  • [P] [R] [D] Can Machine Actually Forget Your Data?
    1 project | /r/MachineLearning | 21 Nov 2022
    We also have a GitHub repo for this topic; please consider starring it if this topic piques your curiosity.
  • [P] Awesome Machine Unlearning
    1 project | /r/MachineLearning | 25 Oct 2022
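
For readers new to the topic, the baseline that the survey quoted above contrasts every efficient method against is exact unlearning by retraining: drop the requested records and refit the model from scratch. The sketch below is a toy illustration of that baseline only; the model choice, data, and function names are illustrative and are not taken from the awesome-machine-unlearning repository.

```python
# Toy illustration of exact unlearning by full retraining on the retained data.
# Everything here (model, data, names) is illustrative, not the repo's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def unlearn_by_retraining(X, y, forget_idx):
    """Honour a deletion request by retraining from scratch without those rows."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    return train(X[keep], y[keep])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = train(X, y)                               # original model
forget_idx = np.array([3, 17, 42])                # "right to be forgotten" requests
model = unlearn_by_retraining(X, y, forget_idx)   # exact but expensive
keep = np.setdiff1d(np.arange(len(X)), forget_idx)
print(f"accuracy on retained data: {model.score(X[keep], y[keep]):.3f}")
```

The cost of this baseline, a full retrain per deletion request, is exactly what the approximate and sharded unlearning methods catalogued in the repository try to avoid.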

What are some alternatives?

When comparing differential-privacy-library and awesome-machine-unlearning you can also consider the following projects:

data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

AIJack - Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)

PyDP - The Python Differential Privacy Library. Built on top of: https://github.com/google/differential-privacy

fides - The Privacy Engineering & Compliance Framework

continual-pretraining-nlp-vision - Code to reproduce experiments from the paper "Continual Pre-Training Mitigates Forgetting in Language and Vision" https://arxiv.org/abs/2205.09357

PrivacyEngCollabSpace - Privacy Engineering Collaboration Space

course-content-dl - NMA deep learning course

Keras - Deep Learning for humans

PyRedactKit - Python CLI tool to redact and un-redact sensitive data from text files. 🔐📝

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

keras - Deep Learning for humans [Moved to: https://github.com/keras-team/keras]