interpretable-ml-book
Book about interpretable machine learning (by christophM)
neural_regression_discontinuity
In this repository, I modify a quasi-experimental statistical procedure for time-series inference using convolutional long short-term memory networks. (by roccojhu)
| | interpretable-ml-book | neural_regression_discontinuity |
|---|---|---|
| Mentions | 37 | 1 |
| Stars | 4,827 | 6 |
| Growth | - | - |
| Activity | 4.2 | 10.0 |
| Last commit | 7 days ago | almost 5 years ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | - |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we track.
interpretable-ml-book
Posts with mentions or reviews of interpretable-ml-book.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-02-18.
- Interpretable Machine Learning – A Guide for Making Black Box Models Explainable
- A Guide to Making Black Box Models Interpretable
- So much for AI
If you're a student, I'd recommend this book: https://christophm.github.io/interpretable-ml-book/
- Best way to make a random forest more explainable (need to know which features are driving the prediction)
Pretty much everyone shows SHAP plots now; they're definitely the way to go. Check out the Christoph Molnar book: https://christophm.github.io/interpretable-ml-book/
- Is there another way to determine the effect of the features other than the built-in feature importances and SHAP values? [Research] [Discussion]
Yes, there are many techniques beyond the two you listed. I suggest doing a survey of techniques (hint: explainable AI or XAI), starting with the following book: Interpretable Machine Learning.
- Which industry/profession/tasks require an aggregate analysis of data representing different physical objects (and what would you call that?)
Ah, alright. It sounds like you're looking for interpretability so I'd suggest this amazing overview of it by Christoph Molnar. If you choose the right models, or the right way of interpreting those, it can help a ton in communicating not only your results, but also what you did to obtain them.
- What skills do I need to really work on?
Not necessarily; decision trees, Naive Bayes, etc., are interpretable. I'd refer to Molnar, specifically his Interpretable Machine Learning text, if you are interested in that subject.
- Random forest vs multiple regression to determine predictor importance.
Consulting something like Interpretable Machine Learning or the documentation of a package like the vip package would also be a really, really good place to start.
- The Rashomon Effect Explained — Does Truth Actually Exist? [13.46]
Just read a book called Interpretable Machine Learning, which focuses on analyzing ML models and determining which inputs have the most impact on the result.
- Interpretable Machine Learning
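Several of the posts above recommend SHAP plots for explaining model predictions. As a rough illustration of the Shapley-value idea behind them, here is a minimal, hypothetical sketch (not the `shap` library's actual API) that computes exact attributions for a tiny model by averaging marginal contributions over all feature orderings; "absent" features are filled with the background mean, a common simplification of the interventional expectation.

```python
# Illustrative sketch of Shapley attributions, assuming a model `f`
# that maps a 2-D array of inputs to a 1-D array of predictions.
import math
from itertools import permutations
import numpy as np

def exact_shapley(f, x, background):
    """Exact Shapley attributions for a single instance x."""
    n = len(x)
    mu = background.mean(axis=0)  # stand-in values for "absent" features
    phi = np.zeros(n)
    for order in permutations(range(n)):
        z = mu.copy()
        prev = f(z[None, :])[0]
        for j in order:            # add features one at a time
            z[j] = x[j]
            cur = f(z[None, :])[0]
            phi[j] += cur - prev   # marginal contribution of feature j
            prev = cur
    return phi / math.factorial(n)

# Toy additive model: the attributions recover each term exactly.
f = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1]
phi = exact_shapley(f, np.array([1.0, 1.0]), np.zeros((10, 2)))
print(phi)  # → [2. 1.]
```

Real SHAP implementations avoid the factorial blow-up with model-specific algorithms (e.g. tree-path methods for random forests), but the attributions they estimate are this same quantity.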
neural_regression_discontinuity
Posts with mentions or reviews of neural_regression_discontinuity.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-02-03.
- [D] Resources for interpretable ML
Code for https://arxiv.org/abs/1811.10154 found: https://github.com/roccojhu/neural_regression_discontinuity
What are some alternatives?
When comparing interpretable-ml-book and neural_regression_discontinuity you can also consider the following projects:
stat_rethinking_2022 - Statistical Rethinking course winter 2022
shap - A game theoretic approach to explain the output of any machine learning model.
machine-learning-yearning - Machine Learning Yearning book by Andrew Ng
serve - ☁️ Build multimodal AI applications with cloud-native stack
random-forest-importances - Code to compute permutation and drop-column importances in Python scikit-learn models
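The random-forest-importances project above computes permutation importances; the core idea is model-agnostic and small enough to sketch in plain NumPy. This is a hypothetical helper (not that project's actual API): shuffle one feature column at a time and measure how much the model's R² drops.

```python
# Minimal sketch of permutation importance, assuming `predict` is any
# fitted model's prediction function over a 2-D feature array.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean drop in R^2 when each feature column is shuffled."""
    rng = np.random.default_rng(seed)

    def r2(y_true, y_pred):
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    baseline = r2(y, predict(X))
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature/target link
            drops.append(baseline - r2(y, predict(Xp)))
        imp[j] = np.mean(drops)
    return imp

# Toy check: y depends only on the first column, so only its
# importance should be non-zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0]
predict = lambda X: 3.0 * X[:, 0]
imp = permutation_importance(predict, X, y)
print(imp)
```

Unlike a tree ensemble's built-in impurity importance, this works for any model, which is why it comes up so often in the interpretability discussions listed above.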