interpretable-ml-book vs machine-learning-yearning

| | interpretable-ml-book | machine-learning-yearning |
|---|---|---|
| Mentions | 37 | 3 |
| Stars | 4,827 | 1,033 |
| Growth | - | - |
| Activity | 4.2 | 10.0 |
| Latest commit | 7 days ago | about 6 years ago |
| Language | Jupyter Notebook | - |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
interpretable-ml-book
- Interpretable Machine Learning – A Guide for Making Black Box Models Explainable
- A Guide to Making Black Box Models Interpretable
-
So much for AI
If you're a student, I'd recommend this book: https://christophm.github.io/interpretable-ml-book/
-
Best way to make a random forest more explainable (need to know which features are driving the prediction)
Pretty much everyone shows SHAP plots now. Definitely the way to go. Check out the Christoph Molnar book. https://christophm.github.io/interpretable-ml-book/
-
Is there another way to determine the effect of the features other than the built-in feature importance and SHAP values? [Research] [Discussion]
Yes, there are many techniques beyond the two you listed. I suggest doing a survey of techniques (hint: explainable AI or XAI), starting with the following book: Interpretable Machine Learning.
-
Which industry/profession/tasks require an aggregate analysis of data representing different physical objects (and what would you call that)?
Ah, alright. It sounds like you're looking for interpretability so I'd suggest this amazing overview of it by Christoph Molnar. If you choose the right models, or the right way of interpreting those, it can help a ton in communicating not only your results, but also what you did to obtain them.
-
What skills do I need to really work on?
Not necessarily; decision trees, Naive Bayes, etc., are interpretable. I'd refer to Molnar, specifically his Interpretable Machine Learning text, if you are interested in that subject.
-
Random forest vs multiple regression to determine predictor importance.
Consulting something like Interpretable Machine Learning or the documentation of a package like the vip package would also be a really, really good place to start.
-
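The permutation importance that packages like vip implement (and that scikit-learn ships as `permutation_importance`) can be sketched as follows; the data, model, and settings are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data: 6 features, only 3 of which carry signal.
X, y = make_regression(n_samples=300, n_features=6, n_informative=3,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature column on held-out data and measure the score drop;
# a large drop means the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: "
          f"{result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

Unlike the forest's built-in impurity importance, this works for any fitted model, which makes it a fair way to compare a random forest against a multiple regression on the same data.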
The Rashomon Effect Explained — Does Truth Actually Exist? [13.46]
Just read a book called Interpretable Machine Learning, which focuses on analyzing ML models and determining which inputs have more impact on the result.
- Interpretable Machine Learning
machine-learning-yearning
-
Summary text on applied ML principles
There's Machine Learning Yearning by Andrew Ng; it's free on GitHub: https://github.com/ajaymache/machine-learning-yearning
-
✨ 10 Free Books for Machine Learning & Data Science 📚
🔗 https://github.com/ajaymache/machine-learning-yearning
-
ML Books You Need to Read
Machine Learning Yearning
What are some alternatives?
stat_rethinking_2022 - Statistical Rethinking course winter 2022
r4ds - R for data science: a book
shap - A game theoretic approach to explain the output of any machine learning model.
Probabilistic-Programming-and-Bayesian-Methods-for-Hackers - aka "Bayesian Methods for Hackers": An introduction to Bayesian methods + probabilistic programming with a computation/understanding-first, mathematics-second point of view. All in pure Python ;)
neural_regression_discontinuity - In this repository, I modify a quasi-experimental statistical procedure for time-series inference using convolutional long short-term memory networks.
serve - ☁️ Build multimodal AI applications with a cloud-native stack
random-forest-importances - Code to compute permutation and drop-column importances in Python scikit-learn models