- domains requiring high interpretability tend to ignore deep learning entirely and put their research into traditional ML; see e.g. counterfactual explanations (an important interpretability method in finance) or rule-based learning (important in medical and legal applications)
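As a minimal sketch of the rule-based learning mentioned above, a shallow decision tree can be fit and printed as human-readable if/else rules with scikit-learn (the dataset and depth here are illustrative choices, not from the original post):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy illustration: a shallow tree keeps the learned rules short and readable.
data = load_breast_cancer()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text renders the fitted tree as plain-text decision rules,
# the kind of transparent model favored in medical or legal settings.
rules = export_text(clf, feature_names=data.feature_names.tolist())
print(rules)
```

Each leaf of the printed tree ends in a `class:` line, so the whole model can be audited by reading a handful of threshold rules.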