mljar-examples
modAL
| | mljar-examples | modAL |
|---|---|---|
| Mentions | 2 | 4 |
| Stars | 58 | 2,143 |
| Stars growth (month over month) | - | 0.8% |
| Activity | 3.3 | 1.9 |
| Last commit | 5 months ago | 2 months ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mljar-examples
-
MLJAR Automated Machine Learning for Tabular Data (Stacking, Golden Features, Explanations, and AutoDoc)
All ML experiments have automatic documentation that creates Markdown reports ready to commit to the repo (example1, example2).
-
Show HN: Mljar Automated Machine Learning for Tabular Data (Explanation, AutoDoc)
The creator here. I've been working on AutoML since 2016. I think the latest release (0.7.15) of MLJAR AutoML is amazing. It has a ton of fantastic features that I always wanted to have in AutoML:
- Operates in three modes: Explain, Perform, Compete.
- `Explain` is for exploratory data analysis and for checking default performance (without hyperparameter tuning). It includes Automatic Exploratory Data Analysis.
- `Perform` is for building production-ready models (HP tuning + ensembling).
- `Compete` is for solving ML competitions within a limited time budget (HP tuning + ensembling + stacking).
- All ML experiments have automatic documentation that creates Markdown reports ready to commit to the repo ([example](https://github.com/mljar/mljar-examples/tree/master/Income_c...)).
- The package produces extensive explanations: decision tree visualization, feature importance, SHAP explanations, advanced metrics values.
- It has advanced feature engineering, such as Golden Features, Feature Selection, time and text transformations, and categorical handling with target, label, or one-hot encodings.
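The stacking that `Compete` mode performs can be illustrated with a toy sketch: base models produce out-of-fold predictions, and a meta-model (here just an average) combines them. This is a pure-stdlib illustration of the general technique, not MLJAR's implementation; all function names are made up for this example.

```python
def mean_model(ys):
    """'Train' a trivial base model: always predict the mean of the training targets."""
    mu = sum(ys) / len(ys)
    return lambda x: mu

def median_model(ys):
    """A second trivial base model: always predict the median."""
    mu = sorted(ys)[len(ys) // 2]
    return lambda x: mu

def out_of_fold_predictions(model_factory, X, y, k=2):
    """For each fold, train on the other folds and predict the held-out rows."""
    preds = [0.0] * len(X)
    for fold in range(k):
        train_y = [yi for i, yi in enumerate(y) if i % k != fold]
        model = model_factory(train_y)
        for i, xi in enumerate(X):
            if i % k == fold:
                preds[i] = model(xi)
    return preds

X = [1, 2, 3, 4, 5, 6]
y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

# Level-1 features: one column of out-of-fold predictions per base model.
oof = [out_of_fold_predictions(f, X, y) for f in (mean_model, median_model)]

# Meta-model: a simple average of the base predictions (a "blender").
stacked = [sum(col[i] for col in oof) / len(oof) for i in range(len(X))]
```

Using out-of-fold predictions (rather than predictions on the training rows themselves) is what keeps the meta-model from overfitting to leaked labels; a real stack would fit a learned meta-model instead of averaging.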
modAL
-
modAL VS encord-active - a user suggested alternative
2 projects | 12 Apr 2023
- What are frameworks/tools used for Human-In-The-Loop (active) learning?
-
Launch HN: Lightly (YC S21): Label only the data which improves your ML model
How does it differentiate from modAL?
https://github.com/modAL-python/modAL
- Active Learning Using Detectron2
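The active-learning loop the threads above discuss centers on pool-based uncertainty sampling, the query strategy modAL is built around: score each unlabeled instance by how unsure the model is, and send the most uncertain one to a human annotator. A minimal pure-stdlib sketch of that idea (the function names here are illustrative, not modAL's API, which wraps scikit-learn estimators):

```python
def uncertainty(probs):
    """Classification uncertainty: 1 minus the highest class probability."""
    return 1.0 - max(probs)

def query(pool_probs):
    """Return the index of the pool instance the model is least sure about."""
    return max(range(len(pool_probs)), key=lambda i: uncertainty(pool_probs[i]))

# Predicted class probabilities for three unlabeled pool instances.
pool_probs = [
    [0.9, 0.1],    # confident prediction
    [0.55, 0.45],  # near-uniform probabilities -> most uncertain
    [0.7, 0.3],
]
idx = query(pool_probs)  # -> 1: label this instance next
```

After the selected instance is labeled, it moves from the pool to the training set, the model is retrained, and the loop repeats; modAL also ships alternatives such as margin and entropy sampling that differ only in the scoring function.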
What are some alternatives?
mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation
active_learning - Code for Active Learning at The ImageNet Scale. This repository implements many popular active learning algorithms and allows training with torch's DDP.
igel - a delightful machine learning tool that allows you to train, test, and use models without writing code
GPflowOpt - Bayesian Optimization using GPflow
automlbenchmark - OpenML AutoML Benchmarking Framework
paramonte - ParaMonte: Parallel Monte Carlo and Machine Learning Library for Python, MATLAB, Fortran, C++, C.
humble-benchmarks - Benchmarking programming languages using statistics and machine learning algorithms
lightly - A python library for self-supervised learning on images.
pretty-print-confusion-matrix - Confusion Matrix in Python: plot a pretty confusion matrix (like Matlab) in python using seaborn and matplotlib
baybe - Bayesian Optimization and Design of Experiments
DataProfiler - What's in your data? Extract schema, statistics and entities from datasets
Encord Active - Open source active learning toolkit to find failure modes in your computer vision models, prioritize data to label next, and drive data curation to improve model performance.