| | sent_debias | AIF360 |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 55 | 2,328 |
| Growth | - | 1.8% |
| Activity | 0.0 | 7.2 |
| Latest commit | over 1 year ago | 16 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
sent_debias
-
academic ethics issues in NLP
Following on from the above: to what extent should we trust big models and the built-in biases they learn from huge scraped datasets? Many current SOTA approaches to few-shot learning on NLP tasks involve fine-tuning existing large language models. There is a lot of interesting research on understanding and removing these biases, like this paper from Liang and Li @ ACL2020. A related point is explainability; again, some interesting work is going on around things like rationale generation, and this now somewhat old paper by Lei et al. (2016) gives good context.
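The debiasing work referenced above builds on a projection idea from earlier word-embedding debiasing: estimate a bias direction in embedding space and remove each embedding's component along it. A minimal sketch with toy vectors (an illustration of the general technique, not the paper's actual code):

```python
# Projection-based debiasing sketch: remove the component of each
# embedding that lies along an estimated "bias direction".

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    norm = dot(v, v) ** 0.5
    return [a / norm for a in v]

def debias(embedding, bias_direction):
    d = normalize(bias_direction)
    # Project the embedding onto the bias direction, then subtract
    # that projection so the result is orthogonal to the direction.
    coeff = dot(embedding, d)
    return [e - coeff * di for e, di in zip(embedding, d)]

bias_dir = [1.0, 0.0, 0.0]   # toy bias direction
emb = [0.7, 0.2, -0.1]       # toy sentence embedding
clean = debias(emb, bias_dir)
print(clean)  # component along the bias direction is now zero
```

In practice the bias direction is estimated from data (e.g. via PCA over contextualized embeddings of paired templates), but the removal step is exactly this projection.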
AIF360
-
https://aif360.mybluemix.net/
- How to detect and tackle bias in my data?
-
Building a Responsible AI Solution - Principles into Practice
Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI on the engineering front, and from existing fairness libraries like Microsoft's Fairlearn and IBM's AI Fairness 360 on the machine learning side of things.
-
Hi Reddit! I'm Milena Pribic, Advisory Designer for AI and the global design representative for AI Ethics at IBM. Ask me anything about scaling ethical AI practices at a huge company!
My advice is to remember that bias comes into the process intentionally and unintentionally! Tools like AI Fairness 360 can help you mitigate that from a development/technical perspective: https://aif360.mybluemix.net/
- [R] What are some of the best research papers to look into for ML Bias
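The group-fairness checks that toolkits like AI Fairness 360 automate boil down to simple statistics over groups defined by a protected attribute. A hand-rolled sketch on toy data (illustrating the metrics, not the library's API):

```python
# Toy data: each record is (protected_attribute, predicted_label),
# where label 1 is the favourable outcome.
records = [
    (0, 1), (0, 0), (0, 0), (0, 1),   # unprivileged group (attr = 0)
    (1, 1), (1, 1), (1, 0), (1, 1),   # privileged group (attr = 1)
]

def selection_rate(records, group):
    """Fraction of the group receiving the favourable outcome."""
    outcomes = [label for attr, label in records if attr == group]
    return sum(outcomes) / len(outcomes)

unpriv_rate = selection_rate(records, 0)  # 2/4 = 0.50
priv_rate = selection_rate(records, 1)    # 3/4 = 0.75

# Statistical parity difference: 0 means parity; negative means the
# unprivileged group is selected less often.
spd = unpriv_rate - priv_rate

# Disparate impact: the "80% rule" flags ratios below 0.8.
di = unpriv_rate / priv_rate

print(f"statistical parity difference: {spd:+.2f}")
print(f"disparate impact: {di:.2f}")
```

AIF360's dataset metrics compute these same quantities (along with many others) once the data is wrapped in its dataset classes, and its mitigation algorithms then try to move them toward parity.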
What are some alternatives?
fairlearn - A Python package to assess and improve fairness of machine learning models.
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
AIX360 - Interpretability and explainability of data and machine learning models
interpret - Fit interpretable models. Explain blackbox machine learning.
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
model-card-toolkit - A toolkit that streamlines and automates the generation of model cards
verifyml - Open-source toolkit to help companies implement responsible AI workflows.
clai - Command Line Artificial Intelligence or CLAI is an open-sourced project from IBM Research aimed to bring the power of AI to the command line interface.
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
Jenkins - Jenkins automation server
EthicML - Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency
fairness - It's about Fairness. Supporting the CULT Family https://github.com/orgs/cultfamily-on-github/repositories