sent_debias
[ACL 2020] Towards Debiasing Sentence Representations (by pliang279)
fairlearn
A Python package to assess and improve fairness of machine learning models. (by fairlearn)
| | sent_debias | fairlearn |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 55 | 1,806 |
| Growth | - | 1.7% |
| Activity | 0.0 | 8.0 |
| Latest commit | over 1 year ago | 6 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sent_debias
Posts with mentions or reviews of sent_debias.
We have used some of these posts to build our list of alternatives
and similar projects.
- academic ethics issues in NLP
Following on from the above, to what extent should we trust big models and the built-in biases that they learn from huge scraped datasets? Many current SOTA approaches to few-shot learning on NLP tasks involve fine-tuning existing large language models. There is a lot of interesting research going on around understanding and removing these biases, like this paper from Liang and Li @ ACL 2020. A related point is explainability; again, some interesting work is going on around things like rationale generation, and this now somewhat old paper by Lei et al. (2016) gives some good context.
fairlearn
Posts with mentions or reviews of fairlearn.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-03-28.
- Fairlearn
- Open source projects to work on AI bias
I'm involved in the Fairlearn project, and we always love getting new contributors. We have a bunch of open issues, ranging from new functionality to writing documentation, so feel free to take a look and see if there is something you would like to work on.
- In your experience, are AI Ethics teams valuable/effective? [D]
I'm involved with the Fairlearn project, so once I figure out what's necessary on the company policy side, my plan is to incorporate these methods into Fairlearn one day.
- Building a Responsible AI Solution - Principles into Practice
Besides the existing monitoring solution mentioned in the section above, we also took inspiration from continuous integration and continuous delivery (CI/CD) testing tools like Jenkins and CircleCI on the engineering front, and from existing fairness libraries like Microsoft's Fairlearn and IBM's AI Fairness 360 on the machine learning side of things.
- Ideas on how to use my data skills for a good cause?
Another commenter mentioned contributing to open-source tools. If you're interested in going that route, I'm involved in the Fairlearn project, and we could always benefit from a good data engineer.
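The fairness assessment the posts above describe boils down to computing a model metric separately for each group defined by a sensitive feature and comparing the results. Below is a minimal standard-library sketch of that idea (computing per-group selection rates and their gap, often called the demographic parity difference); the toy data is made up for illustration, and fairlearn itself provides richer tooling for this.

```python
# Hypothetical toy data: binary model predictions and a sensitive feature
# (e.g. a demographic attribute) for eight individuals.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]

def selection_rate(preds):
    """Fraction of individuals receiving a positive prediction."""
    return sum(preds) / len(preds)

# Disaggregate the metric by group, as a fairness assessment does.
by_group = {
    g: selection_rate([p for p, gr in zip(y_pred, group) if gr == g])
    for g in sorted(set(group))
}

# Demographic parity difference: gap between the highest and lowest
# group selection rates; 0 means all groups are selected equally often.
dp_diff = max(by_group.values()) - min(by_group.values())
print(by_group)  # {'a': 0.75, 'b': 0.25}
print(dp_diff)   # 0.5
```

A gap of 0.5 here would flag that group "a" receives positive predictions three times as often as group "b", which is the kind of disparity these libraries surface and then try to mitigate.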
What are some alternatives?
When comparing sent_debias and fairlearn you can also consider the following projects:
AIF360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
verifyml - Open-source toolkit to help companies implement responsible AI workflows.
model-card-toolkit - A toolkit that streamlines and automates the generation of model cards
Jenkins - Jenkins automation server
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
EthicML - Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency