| | alibi | DALEX |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 2,293 | 1,326 |
| Growth | 0.7% | 0.8% |
| Activity | 7.7 | 5.9 |
| Latest commit | 7 days ago | 3 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alibi
- Alibi: Open-source Python lib for ML model inspection and interpretation
Ask HN: Who is hiring? (January 2022)
Seldon | Multiple positions | London/Cambridge UK | Onsite/Remote | Full time | seldon.io
At Seldon we are building industry-leading solutions for deploying, monitoring, and explaining machine learning models. We are an open-core company with several successful open-source projects, such as:
* https://github.com/SeldonIO/seldon-core
* https://github.com/SeldonIO/mlserver
* https://github.com/SeldonIO/alibi
* https://github.com/SeldonIO/alibi-detect
* https://github.com/SeldonIO/tempo
We are hiring for a range of positions, including software engineers (Go, k8s), ML engineers (Python, Go), frontend engineers (JS), UX designers, and product managers. All open positions can be found at https://www.seldon.io/careers/
- Ask HN: Who is hiring? (December 2021)
Best alternatives to 'shap' package?
Alibi Explain might be an option, depending on what you are looking for: https://github.com/SeldonIO/alibi
DALEX
Twitter set to accept ‘best and final offer’ of Elon Musk
Which he will not do, because: a) He can't; it's a black-box algorithm. It actually is open source already, but that doesn't mean much, as it's useless without Twitter's data: https://github.com/ModelOriented/DALEX b) He won't release data that shows the algorithm is racist and amplifies conservative and extremist content, and he won't remove such functions because doing so would cost him billions.
[D] What are your favorite Random Forest implementations that support categoricals
There are a couple of ways to use Shapley values for explanations in R. One is DALEX, which also contains many other methods besides SHAP. Another is iml. I am sure there are several other implementations of SHAP as well.
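To make the idea behind these packages concrete, here is a small, self-contained Python sketch (illustrative only, not DALEX or iml code) that computes exact Shapley values for a toy model by averaging each feature's marginal contribution over all feature orderings. The model, feature names, and baseline of 0 are invented for the example:

```python
from itertools import permutations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values by enumerating all orderings.

    `value(subset)` returns the model's output when only the
    features in `subset` are "known"; absent features fall back
    to a baseline. Exponential cost, so toy-sized inputs only --
    real libraries (SHAP, DALEX, iml) use sampling or model-specific
    shortcuts instead.
    """
    contrib = {f: 0.0 for f in features}
    for order in permutations(features):
        present = set()
        prev = value(present)
        for f in order:
            present.add(f)
            cur = value(present)
            contrib[f] += cur - prev  # marginal contribution of f
            prev = cur
    n_orders = factorial(len(features))
    return {f: c / n_orders for f, c in contrib.items()}

# Hypothetical additive model: f(x) = 2*x1 + 1*x2, evaluated at x1=3, x2=5.
weights = {"x1": 2.0, "x2": 1.0}
inputs = {"x1": 3.0, "x2": 5.0}

def value(subset):
    return sum(weights[f] * inputs[f] for f in subset)

phi = shapley_values(list(weights), value)
# For an additive model, each feature's Shapley value is exactly its
# own term: phi == {"x1": 6.0, "x2": 5.0}, and the values sum to f(x).
```

The key property the packages rely on is visible here: the Shapley values always sum to the difference between the full prediction and the baseline, which is what makes them usable as an additive explanation of a single prediction.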
What are some alternatives?
interpret - Fit interpretable models. Explain blackbox machine learning.
shapley - The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
captum - Model interpretability and understanding for PyTorch
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Lime-For-Time - Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin to the domain of time series classification
conductor - Conductor is a microservices orchestration engine.
responsible-ai-toolbox - Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
MLServer - An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
causallift - CausalLift: Python package for causality-based Uplift Modeling in real-world business
catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.