| | athena | AIF360 |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 42 | 2,316 |
| Growth | - | 1.3% |
| Activity | 0.0 | 7.2 |
| Last Commit | over 2 years ago | 16 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
athena
- How to Put Out Democracy’s Dumpster Fire: Our democratic habits have been killed off by an internet kleptocracy that profits from disinformation, polarization, and rage. Here’s how to fix that.
While users could bookmark algorithms for use anywhere on reddit, the default sorting mode for a subreddit would be established by an ensemble of the algorithms, weighted by the usage of those algorithms on that subreddit. Such a system could be robust against bot attacks, as an adversary must defeat not one algorithm but the majority of algorithms in use (see Athena: "A Framework for Defending Machine Learning Systems Against Adversarial Attacks").
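The weighted-ensemble default sort described above can be sketched in a few lines of Python. The algorithm names, score functions, and usage counts below are hypothetical placeholders, not anything from the original post:

```python
# Hypothetical sketch: combine several ranking algorithms into one default
# sort, weighting each algorithm by how many of the subreddit's users have
# selected it. All names and numbers here are illustrative.

def ensemble_rank(post_ids, algorithms, usage):
    """Rank posts by a usage-weighted sum of per-algorithm scores."""
    total_usage = sum(usage.values())
    totals = dict.fromkeys(post_ids, 0.0)
    for name, score in algorithms.items():
        weight = usage.get(name, 0) / total_usage
        for pid in post_ids:
            totals[pid] += weight * score(pid)
    return sorted(post_ids, key=totals.__getitem__, reverse=True)

# Toy data: two "algorithms" (top = upvotes, new = recency), with 90% of
# users on "top" and 10% on "new".
stats = {"a": (100, 10.0), "b": (50, 1.0), "c": (10, 0.5)}  # (ups, age_hours)
algorithms = {"top": lambda p: stats[p][0], "new": lambda p: -stats[p][1]}
usage = {"top": 900, "new": 100}
print(ensemble_rank(list(stats), algorithms, usage))  # ['a', 'b', 'c']
```

A bot that games only one score function shifts just that algorithm's weighted contribution, which is the robustness property the quote is after.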
AIF360
- https://aif360.mybluemix.net/
- How to detect and tackle bias in my data?
- Building a Responsible AI Solution - Principles into Practice
Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI on the engineering front, and from existing fairness libraries like Microsoft's Fairlearn and IBM's AI Fairness 360 on the machine learning side of things.
- Hi Reddit! I'm Milena Pribic, Advisory Designer for AI and the global design representative for AI Ethics at IBM. Ask me anything about scaling ethical AI practices at a huge company!
My advice is to remember that bias comes into the process intentionally and unintentionally! Tools like AI Fairness 360 can help you mitigate that from a development/technical perspective: https://aif360.mybluemix.net/
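To make this concrete, here is a stdlib-only sketch of one of the metrics AIF360 reports: statistical parity difference, P(favorable outcome | unprivileged) minus P(favorable outcome | privileged). The data is made up for illustration; in AIF360 itself the same quantity comes from `BinaryLabelDatasetMetric` on the library's dataset objects:

```python
# Minimal sketch of AIF360's statistical parity difference metric.
# 0.0 means parity; negative values mean the unprivileged group receives
# the favorable outcome less often. Example data is invented.

def statistical_parity_difference(labels, groups, favorable=1, privileged=1):
    """Favorable-outcome rate of the unprivileged group minus that of
    the privileged group."""
    def rate(g):
        outcomes = [y for y, grp in zip(labels, groups) if grp == g]
        return sum(y == favorable for y in outcomes) / len(outcomes)
    return rate(1 - privileged) - rate(privileged)

# 4 of 5 privileged vs 2 of 5 unprivileged individuals get the favorable label.
labels = [1, 1, 1, 1, 0,  1, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 1,  0, 0, 0, 0, 0]
print(statistical_parity_difference(labels, groups))  # -0.4
```

AIF360 bundles dozens of such metrics plus mitigation algorithms (e.g. reweighing); this sketch only shows the shape of the measurement step.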
- [R] What are some of the best research papers to look into for ML Bias
What are some alternatives?
fawkes - Fawkes, privacy preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
fairlearn - A Python package to assess and improve fairness of machine learning models.
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
faceswap - Deepfakes Software For All
AIX360 - Interpretability and explainability of data and machine learning models
interpret - Fit interpretable models. Explain blackbox machine learning.
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
model-card-toolkit - A toolkit that streamlines and automates the generation of model cards
verifyml - Open-source toolkit to help companies implement responsible AI workflows.
clai - Command Line Artificial Intelligence or CLAI is an open-sourced project from IBM Research aimed to bring the power of AI to the command line interface.
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models