- augmented-interpretable-models VS AutoCog
- augmented-interpretable-models VS shap
- augmented-interpretable-models VS scikit-learn-ts
- augmented-interpretable-models VS ai_story_scale
- augmented-interpretable-models VS align-transformers
- augmented-interpretable-models VS DeepLearning
- augmented-interpretable-models VS language-planner
- augmented-interpretable-models VS HybridAGI
- augmented-interpretable-models VS imodels
- augmented-interpretable-models VS gan-vae-pretrained-pytorch
Augmented-interpretable-models Alternatives
Similar projects and alternatives to augmented-interpretable-models
- shap: Discontinued; a game-theoretic approach to explain the output of any machine learning model. [Moved to: https://github.com/shap/shap] (by slundberg)
- scikit-learn-ts: Powerful machine learning library for Node.js that uses Python's scikit-learn under the hood.
- ai_story_scale: The AI story scale (AISS), a human rating scale for texts written with generative language models.
- language-planner: Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents".
- HybridAGI: The programmable, Cypher-based neuro-symbolic AGI that lets you program its behavior using Graph-based Prompt Programming, for people who want AI to behave as expected.
- imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
augmented-interpretable-models discussion
augmented-interpretable-models reviews and mentions
- [R] Emb-GAM: an Interpretable and Efficient Predictor using Pre-trained Language Models
Deep learning models have achieved impressive prediction performance but often sacrifice interpretability, a critical consideration in high-stakes domains such as healthcare or policymaking. In contrast, generalized additive models (GAMs) can maintain interpretability but often suffer from poor prediction performance due to their inability to effectively capture feature interactions. In this work, we aim to bridge this gap by using pre-trained neural language models to extract embeddings for each input before learning a linear model in the embedding space. The final model (which we call Emb-GAM) is a transparent, linear function of its input features and feature interactions. Leveraging the language model allows Emb-GAM to learn far fewer linear coefficients, model larger interactions, and generalize well to novel inputs (e.g., unseen n-grams in text). Across a variety of NLP datasets, Emb-GAM achieves strong prediction performance without sacrificing interpretability. All code is made available on GitHub.
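The recipe in the abstract is compact enough to sketch: embed each n-gram of a document with a pre-trained language model, sum those embeddings into one feature vector, fit a linear model on top, and read off one additive contribution per n-gram. Below is a minimal sketch of that idea in Python; the `bert-base-uncased` checkpoint, the toy data, and the `embed_ngrams` helper are illustrative assumptions, not the repository's actual API.

```python
# Minimal Emb-GAM-style sketch (assumptions: bert-base-uncased, toy data,
# and the hypothetical helper `embed_ngrams` -- not the repo's real code).
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_ngrams(text, n=2):
    """Embed each n-gram of `text` with the pre-trained LM,
    mean-pooling over tokens. Returns (ngrams, embeddings)."""
    words = text.split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)] or [text]
    embs = []
    with torch.no_grad():
        for ng in ngrams:
            inputs = tokenizer(ng, return_tensors="pt")
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
            embs.append(hidden.mean(dim=1).squeeze(0).numpy())
    return ngrams, np.stack(embs)

# Toy data: each document is represented as the SUM of its n-gram
# embeddings, so the fitted classifier stays additive over n-grams.
texts = ["great movie loved it", "terrible plot bad acting",
         "loved the acting", "bad movie"]
labels = np.array([1, 0, 1, 0])
X = np.stack([embed_ngrams(t)[1].sum(axis=0) for t in texts])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Interpretation: because the document vector is a sum, each n-gram
# contributes a single scalar to the logit, like a GAM term.
ngrams, embs = embed_ngrams("loved the movie")
for ng, contribution in zip(ngrams, embs @ clf.coef_.ravel()):
    print(f"{ng!r}: {contribution:+.3f}")
```

The design choice that makes this transparent is the summation: since the classifier is linear and the document embedding is a sum of n-gram embeddings, the prediction decomposes exactly into per-n-gram scalar contributions, which is what lets the model keep GAM-style interpretability while borrowing the language model's representation power.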
Stats
microsoft/augmented-interpretable-models is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of augmented-interpretable-models is Jupyter Notebook.