imodels vs transformers
| | imodels | transformers |
|---|---|---|
| Mentions | 7 | 175 |
| Stars | 1,290 | 125,021 |
| Growth | - | 3.1% |
| Activity | 8.5 | 10.0 |
| Latest commit | 5 days ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
imodels
-
[D] Have researchers given up on traditional machine learning methods?
- all domains requiring high interpretability largely ignore deep learning and put their research effort into traditional ML; see e.g. counterfactual examples, an important interpretability method in finance, or rule-based learning, which matters in medical or legal applications
-
What would be my best approach given the data I have?
Next, this variable will be your target and you can use various supervised learning models to answer your question. Since interpretation is key, you can use something from here: https://github.com/csinva/imodels, or fit some black-box models and use SHAP to understand which features contributed most.
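A minimal sketch of the two routes suggested above, on a stand-in scikit-learn dataset; the specific estimator (imodels' RuleFitClassifier) and the SHAP TreeExplainer call are illustrative choices, not part of the original comment:

```python
from imodels import RuleFitClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap

X, y = load_breast_cancer(return_X_y=True)          # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Option A: fit an interpretable rule-based model directly (imodels follows the sklearn API)
rulefit = RuleFitClassifier()
rulefit.fit(X_train, y_train)
print("RuleFit test accuracy:", rulefit.score(X_test, y_test))

# Option B: fit a black-box model and use SHAP to see which features contributed most
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)          # per-feature contributions (shape varies by shap version)
```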
-
Random Forest Estimation Question
Option 2) fit a model from https://github.com/csinva/imodels on the predicted values of the RF
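A hedged sketch of option 2 above, i.e. distilling the random forest into a global surrogate by fitting an interpretable model on the forest's predictions; the dataset and the GreedyTreeClassifier estimator are assumptions used only for illustration:

```python
from imodels import GreedyTreeClassifier            # assumed imodels estimator; any interpretable classifier works
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)          # placeholder dataset
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# The forest's predictions, not the original labels, become the surrogate's target,
# so the simple model describes what the RF learned rather than the raw data.
y_rf = rf.predict(X)
surrogate = GreedyTreeClassifier().fit(X, y_rf)
print("surrogate fidelity to the RF:", surrogate.score(X, y_rf))
```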
-
UC Berkeley Researchers Introduce 'imodels': A Python Package For Fitting Interpretable Machine Learning Models
Despite recent breakthroughs in the formulation and fitting of interpretable models, implementations are frequently challenging to locate, utilize, and compare. imodels fills this void by offering a single interface and implementation for a wide range of state-of-the-art interpretable modeling techniques, especially rule-based methods. imodels is essentially a Python package for predictive modeling that is simple, transparent, and accurate. It gives users a straightforward way to fit and use state-of-the-art interpretable models, all of which are compatible with scikit-learn (Pedregosa et al., 2011). These models can frequently replace black-box models while boosting interpretability and computing efficiency, without compromising predictive accuracy.
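A small, hedged example of the scikit-learn compatibility the article describes: an imodels estimator (FIGSClassifier here, chosen only for illustration) dropping into standard sklearn tooling such as pipelines and cross-validation:

```python
from imodels import FIGSClassifier                  # illustrative choice of imodels estimator
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Because imodels estimators follow the sklearn interface, they drop into
# pipelines, cross-validation, and grid search without any glue code.
pipe = make_pipeline(StandardScaler(), FIGSClassifier())
print("3-fold CV accuracy:", cross_val_score(pipe, X, y, cv=3).mean())
```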
-
[D] Looking for open source projects to contribute
Our package imodels is expanding its sklearn-compatible set of interpretable models and is always looking for new contributors!
- imodels: a package extending sklearn with state-of-the-art models for interpretable data science (e.g. Bayesian Rule Lists, RuleFit)
- imodels: a package extending sklearn with state-of-the-art interpretable models (e.g. Bayesian Rule Lists, RuleFit) from BAIR [P]
transformers
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) or dm-haiku (https://github.com/google-deepmind/dm-haiku) were some of the best-developed communities in the Jax AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
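For readers unfamiliar with the Jax options listed above, here is a minimal, hedged sketch of what a model definition looks like in flax.linen; the module, sizes, and shapes are arbitrary placeholders:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int = 128
    out: int = 10

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)   # parameters are created on init
        x = nn.relu(x)
        return nn.Dense(self.out)(x)

model = MLP()
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 32)))   # build parameters
logits = model.apply(params, jnp.ones((4, 32)))                  # forward pass
print(logits.shape)                                              # (4, 10)
```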
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
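A hedged sketch of how prompt lookup decoding is exposed in recent transformers releases via the prompt_lookup_num_tokens argument to generate(); the model id and parameter values are placeholders, and the feature requires a reasonably recent version of the library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"                                        # placeholder model id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "def fibonacci(n):\n    \"\"\"Return the n-th Fibonacci number.\"\"\"\n"
inputs = tok(prompt, return_tensors="pt")

# Draft tokens are looked up as n-grams already present in the prompt and then
# verified by the model, so the output matches ordinary greedy decoding exactly.
out = model.generate(**inputs, max_new_tokens=64, prompt_lookup_num_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
```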
-
AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
-
Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to recipe for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 was actually better. However, the one-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
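For context on the one-cycle schedule the comment refers to, a minimal sketch using PyTorch's built-in OneCycleLR; the model, step count, and the 1e-5 peak rate are illustrative values only:

```python
import torch

model = torch.nn.Linear(10, 2)                       # stand-in for a real network
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
steps = 1000
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-5, total_steps=steps)

for _ in range(steps):
    opt.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()   # dummy loss
    loss.backward()
    opt.step()
    sched.step()                                     # LR warms up to max_lr, then anneals
```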
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
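A small sketch of tinkering with the open Mistral implementation through transformers' auto classes; instantiating from the config alone avoids downloading the full weights (the model id is the public Mistral 7B checkpoint):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Fetch just the config of the public Mistral 7B checkpoint (no weights are downloaded).
cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print(cfg.num_hidden_layers, cfg.hidden_size, cfg.sliding_window)

# Shrink the depth so the randomly initialized toy model fits in memory, then build
# the architecture from the config alone to experiment with the implementation.
cfg.num_hidden_layers = 2
model = AutoModelForCausalLM.from_config(cfg)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```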
-
Failing to reproduce the same evaluation metric scores during inference.
I am aware that using mixed precision reduces the numerical stability of the weights and introduces some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
What are some alternatives?
pycaret - An open-source, low-code machine learning library in Python
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
interpret - Fit interpretable models. Explain blackbox machine learning.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
shap - A game theoretic approach to explain the output of any machine learning model.
llama - Inference code for Llama models
linear-tree - A python library to build Model Trees with Linear Models at the leaves.
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
docarray - Represent, send, store and search multimodal data
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Mathematics-for-Machine-Learning-and-Data-Science-Specialization-Coursera - Mathematics for Machine Learning and Data Science Specialization - Coursera - deeplearning.ai - solutions and notes
huggingface_hub - The official Python client for the Huggingface Hub.