MachineLearning-QandAI-book vs augmented-interpretable-models

| | MachineLearning-QandAI-book | augmented-interpretable-models |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 243 | 37 |
| Growth | - | - |
| Activity | 6.4 | 7.4 |
| Last commit | about 1 month ago | about 1 month ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed, where recent commits carry more weight than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects being tracked.
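The exact formula behind the activity number isn't published, but the description above implies a recency-weighted commit score mapped onto a 0-10 percentile scale. A minimal sketch of one plausible reading, assuming an exponential half-life decay and a percentile ranking across tracked projects (both assumptions for illustration, not the site's actual method):

```python
def raw_activity(commit_ages_days, half_life=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age / half_life), so newer commits weigh more.
    The 30-day half-life is an assumed parameter."""
    return sum(0.5 ** (age / half_life) for age in commit_ages_days)

def activity_score(project_commit_ages, all_projects_commit_ages):
    """Map the raw score to a 0-10 percentile rank, matching the
    'activity of 9.0 = top 10%' reading above."""
    mine = raw_activity(project_commit_ages)
    scores = [raw_activity(ages) for ages in all_projects_commit_ages]
    rank = sum(s <= mine for s in scores) / len(scores)
    return round(10 * rank, 1)

# Toy usage: three tracked projects, commit ages given in days.
tracked = [[1, 2, 3], [10, 40, 90], [300, 400]]
print(activity_score([1, 2, 3], tracked))  # most active -> near 10
```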
MachineLearning-QandAI-book
- Machine Learning and AI Beyond the Basics Book
- Machine Learning Q and AI (Book by Sebastian Raschka)
- New book explaining more advanced concepts in machine learning, deep learning, and AI
A little note, though: it's not a coding book; it's focused more on concepts. I do have supplementary materials with code examples for some chapters where it makes sense: https://github.com/rasbt/MachineLearning-QandAI-book
augmented-interpretable-models
- [R] Emb-GAM: an Interpretable and Efficient Predictor using Pre-trained Language Models
Deep learning models have achieved impressive prediction performance but often sacrifice interpretability, a critical consideration in high-stakes domains such as healthcare or policymaking. In contrast, generalized additive models (GAMs) can maintain interpretability but often suffer from poor prediction performance due to their inability to effectively capture feature interactions. In this work, we aim to bridge this gap by using pre-trained neural language models to extract embeddings for each input before learning a linear model in the embedding space. The final model (which we call Emb-GAM) is a transparent, linear function of its input features and feature interactions. Leveraging the language model allows Emb-GAM to learn far fewer linear coefficients, model larger interactions, and generalize well to novel inputs (e.g., unseen n-grams in text). Across a variety of NLP datasets, Emb-GAM achieves strong prediction performance without sacrificing interpretability. All code is made available on GitHub.
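The abstract describes a simple recipe: embed each n-gram of the input with a pre-trained language model, sum the embeddings, and fit a linear model on the sum, so the prediction decomposes into an additive contribution per n-gram. Below is a minimal sketch of that idea, assuming Hugging Face `transformers` and scikit-learn; the model choice, mean pooling, n-gram range, and toy data are illustrative stand-ins, not the paper's exact configuration.

```python
# Sketch of the Emb-GAM idea: linear model over summed n-gram embeddings.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")

def ngrams(text, n=2):
    """All unigrams and bigrams of a whitespace-tokenized text."""
    words = text.split()
    grams = list(words)
    grams += [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return grams

@torch.no_grad()
def embed(phrase):
    """Mean-pooled last-hidden-state embedding of one phrase."""
    inputs = tokenizer(phrase, return_tensors="pt")
    hidden = lm(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

def featurize(text):
    """A text's feature vector is the SUM of its n-gram embeddings,
    so the fitted linear model decomposes additively over n-grams."""
    return np.sum([embed(g) for g in ngrams(text)], axis=0)

# Toy data, purely illustrative.
texts = ["great movie", "terrible plot", "loved it", "boring and slow"]
labels = [1, 0, 1, 0]

X = np.stack([featurize(t) for t in texts])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Interpretability: each n-gram's additive contribution to the logit
# is just its embedding dotted with the learned coefficients.
for g in ngrams("great but slow"):
    print(g, float(embed(g) @ clf.coef_[0]))
```

Because the feature vector is a plain sum, the learned coefficients score every n-gram independently, which is what makes the final predictor a transparent, GAM-style additive model despite using a deep network for the embeddings.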
What are some alternatives?
- Deep-Learning-With-TensorFlow - All the resources and hands-on exercises for you to get started with Deep Learning in TensorFlow
- language-planner - Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
- shap - A game-theoretic approach to explain the output of any machine learning model. [Moved to: https://github.com/shap/shap]
- scikit-learn-ts - Powerful machine learning library for Node.js – uses Python's scikit-learn under the hood.
- DeepLearning - Contains all my work and references for deep learning
- handson-ml - ⛔️ DEPRECATED – See https://github.com/ageron/handson-ml3 instead.
- gan-vae-pretrained-pytorch - Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in PyTorch.
- AutoCog - Automaton & Cognition
- imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
- align-transformers - This is an old library. Try pyvene instead!