imodels
ANN-decompiler
| | imodels | ANN-decompiler |
|---|---|---|
| Mentions | 7 | 6 |
| Stars | 1,290 | 20 |
| Growth | - | - |
| Activity | 8.5 | 0.0 |
| Latest commit | 5 days ago | over 2 years ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
imodels
-
[D] Have researchers given up on traditional machine learning methods?
- all domains requiring high interpretability ignore deep learning entirely and put all their research into traditional ML; see e.g. counterfactual examples, an important class of interpretability methods in finance, or rule-based learning, important in medical or legal applications
-
What would be my best approach given the data I have?
Next, this variable will be your target, and you can use various supervised learning models to answer your question. Since interpretation is key, you can use something from here: https://github.com/csinva/imodels — or fit some black-box models and use shap to understand which features contributed most.
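SHAP itself requires the `shap` package; a simpler, model-agnostic way to ask "which features contributed most" is permutation importance, sketched below in pure Python. The `predict` callable and the toy data are hypothetical, purely for illustration.

```python
import random

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Measure each feature's importance as the drop in accuracy
    when that feature's column is randomly shuffled."""
    rng = random.Random(seed)

    def accuracy(rows):
        preds = predict(rows)
        return sum(p == yi for p, yi in zip(preds, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for f in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[f] for row in X]
            rng.shuffle(col)  # break the feature/target association
            shuffled = [row[:f] + [v] + row[f + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances  # larger value = model relied on that feature more

# Toy black-box model that only ever looks at feature 0:
predict = lambda rows: [1 if r[0] > 0 else 0 for r in rows]
X = [[1, 9], [-1, 9], [2, -3], [-2, -3]]
y = [1, 0, 1, 0]
imp = permutation_importance(predict, X, y)
```

Because the toy model ignores feature 1 completely, shuffling that column never changes a prediction, so its importance comes out exactly zero — the same qualitative signal a SHAP summary plot would give.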
-
Random Forest Estimation Question
Option 2) fit a model from https://github.com/csinva/imodels on the predicted values of the RF
-
UC Berkeley Researchers Introduce 'imodels': A Python Package For Fitting Interpretable Machine Learning Models
Despite recent breakthroughs in the formulation and fitting of interpretable models, implementations are frequently hard to locate, use, and compare. imodels fills this gap by offering a single interface and implementation for a wide range of state-of-the-art interpretable modeling techniques, especially rule-based methods. imodels is a Python tool for predictive modeling that is simple, transparent, and accurate. It gives users a straightforward way to fit and use state-of-the-art interpretable models, all of which are compatible with scikit-learn (Pedregosa et al., 2011). These models can frequently replace black-box models, boosting interpretability and computational efficiency without compromising predictive accuracy.
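imodels estimators follow the standard scikit-learn `fit`/`predict` interface, so they drop in wherever an sklearn model would. The sketch below is not imodels code: it is a hypothetical one-rule classifier in pure Python, shown only to illustrate that interface and why a rule-based model is easy to inspect — the entire fitted "model" is one human-readable rule.

```python
class OneRuleClassifier:
    """Toy rule-based model: learn one rule of the form
    'predict 1 if x[feature] > threshold', chosen by training accuracy."""

    def fit(self, X, y):
        best = None
        for f in range(len(X[0])):
            for t in sorted({row[f] for row in X}):
                preds = [1 if row[f] > t else 0 for row in X]
                acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t)
        self.accuracy_, self.feature_, self.threshold_ = best
        return self

    def predict(self, X):
        return [1 if row[self.feature_] > self.threshold_ else 0 for row in X]

# The fitted model is a single readable rule, not a weight matrix:
X = [[1.0, 5.0], [2.0, 1.0], [3.0, 6.0], [4.0, 2.0]]
y = [1, 0, 1, 0]
clf = OneRuleClassifier().fit(X, y)
print(f"predict 1 if x[{clf.feature_}] > {clf.threshold_}")
```

With a real imodels estimator (e.g. a rule-list or RuleFit model) the pattern is the same — construct, `fit(X, y)`, `predict(X)` — but the learned rules are far richer than this single threshold.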
-
[D] Looking for open source projects to contribute
Our package imodels is expanding our sklearn-compatible set of interpretable models and always looking for new contributors!
- imodels: a package extending sklearn with state-of-the-art models for interpretable data science (e.g. Bayesian Rule Lists, RuleFit)
- imodels: a package extending sklearn with state-of-the-art interpretable models (e.g. Bayesian Rule Lists, RuleFit) from BAIR [P]
ANN-decompiler
-
ChatGPT refuses to create a poem admiring Donald Trump but creates a poem and admires Joe Biden. ChatGPT is built in with political biases.
I can understand why they would say that, as what they refer to as "woke" has massively invaded everything, but you are probably right that it was not done on purpose. It's not that easy to filter things like that out, especially not if humans have to do it manually. Also, a friendly reminder that this is not real artificial intelligence; there is a good resource here https://github.com/Shamar/ANN-decompiler that explains a bit better why it's mostly a magic trick. We built models like that on paper in the '70s — not as large as these, but it's not exactly a new trick.
- "AI" Demystified: A Decompiler
-
"AI" demystified: a decompiler for "artificial neural networks"
> Does this somehow smuggle the training dataset back into the VM?
Turns out you were right about this: http://www.tesio.it/2021/09/01/a_decompiler_for_artificial_n...
Obviously I was not aware of this, so the whole decompilation process was a waste of computation time, but it neither proves nor disproves anything about the "model"'s relation to the source dataset.
-
[R] "AI" demystified: a decompiler
But I would be very happy to learn from you how to compute the whole source dataset from the output produced by compile.py without considering the "model".
What are some alternatives?
pycaret - An open-source, low-code machine learning library in Python
pycm - Multi-class confusion matrix library in Python
interpret - Fit interpretable models. Explain blackbox machine learning.
django-ai - Artificial Intelligence for Django
shap - A game theoretic approach to explain the output of any machine learning model.
typedb-ml - TypeDB-ML is the Machine Learning integrations library for TypeDB
linear-tree - A python library to build Model Trees with Linear Models at the leaves.
intelligent-trading-bot - Intelligent Trading Bot: Automatically generating signals and trading based on machine learning and feature engineering
docarray - Represent, send, store and search multimodal data
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
Mathematics-for-Machine-Learning-and-Data-Science-Specialization-Coursera - Mathematics for Machine Learning and Data Science Specialization - Coursera - deeplearning.ai - solutions and notes
0xDeCA10B - Sharing Updatable Models (SUM) on Blockchain