mljar-supervised VS mljar-examples

Compare mljar-supervised vs mljar-examples and see what their differences are.

                  mljar-supervised     mljar-examples
Mentions          51                   2
Stars             2,929                58
Growth            1.2%                 -
Activity          8.5                  3.3
Latest commit     11 days ago          5 months ago
Language          Python               Jupyter Notebook
License           MIT License          Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
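
The exact activity formula is not published, so the following is only a rough illustration of the idea that recent commits outweigh old ones: an exponentially decayed sum over commit ages (the half-life value is an arbitrary assumption).

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Illustrative recency-weighted activity score (NOT the site's
    actual formula, which is not public). Each commit contributes a
    weight that halves every `half_life_days` days."""
    now = datetime.now(timezone.utc)
    return sum(
        0.5 ** ((now - d).total_seconds() / 86400.0 / half_life_days)
        for d in commit_dates
    )

# Example: three commits; the most recent one dominates the score.
commits = [
    datetime(2024, 4, 1, tzinfo=timezone.utc),
    datetime(2024, 5, 15, tzinfo=timezone.utc),
    datetime(2024, 6, 1, tzinfo=timezone.utc),
]
print(activity_score(commits))
```

Scores of this kind are then ranked across all tracked projects and reported as a percentile-style number, which is how an activity of 9.0 maps to the top 10%.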

mljar-supervised

Posts with mentions or reviews of mljar-supervised. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-24.

mljar-examples

Posts with mentions or reviews of mljar-examples. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-01-05.
  • MLJAR Automated Machine Learning for Tabular Data (Stacking, Golden Features, Explanations, and AutoDoc)
    3 projects | /r/learnmachinelearning | 5 Jan 2021
    All ML experiments have automatic documentation that creates Markdown reports ready to commit to the repo (example1, example2).
  • Show HN: Mljar Automated Machine Learning for Tabular Data (Explanation,AutoDoc)
    3 projects | news.ycombinator.com | 5 Jan 2021
    The creator here. I've been working on AutoML since 2016. I think the latest release (0.7.15) of MLJAR AutoML is amazing. It has a ton of fantastic features that I've always wanted to have in AutoML:

    - Operates in three modes: Explain, Perform, Compete (a usage sketch follows this list).

    - `Explain` is for exploratory data analysis and for checking the default performance (without HP tuning). It includes Automatic Exploratory Data Analysis.

    - `Perform` is for building production-ready models (HP tuning + ensembling).

    - `Compete` is for solving ML competitions within a limited time budget (HP tuning + ensembling + stacking).

    - All ML experiments have automatic documentation which creates Markdown reports ready to commit to the repo ([example](https://github.com/mljar/mljar-examples/tree/master/Income_c...)).

    - The package produces extensive explanations: decision tree visualization, feature importance, SHAP explanations, advanced metrics values.

    - It has advanced feature engineering, such as Golden Features, Features Selection, Time and Text Transformations, and categorical handling with target, label, or one-hot encodings.
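
To make the modes and options above concrete, here is a minimal usage sketch of mljar-supervised's `AutoML` class. The parameter names (`mode`, `total_time_limit`, `results_path`, `golden_features`, `features_selection`, `explain_level`) follow the project's documented API, but defaults and behavior can vary between versions, so treat this as a sketch rather than a canonical recipe.

```python
# pip install mljar-supervised scikit-learn
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from supervised.automl import AutoML

# A small tabular classification dataset for demonstration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

automl = AutoML(
    mode="Compete",                # or "Explain" / "Perform"
    total_time_limit=600,          # time budget in seconds
    golden_features=True,          # the feature engineering noted above
    features_selection=True,
    explain_level=2,               # feature importance, SHAP, etc.
    results_path="AutoML_report",  # Markdown reports are written here
)
automl.fit(X_train, y_train)
predictions = automl.predict(X_test)
```

After `fit` finishes, the `results_path` directory contains the auto-generated Markdown leaderboard and per-model reports described above, ready to commit to a repo.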

What are some alternatives?

When comparing mljar-supervised and mljar-examples you can also consider the following projects:

optuna - A hyperparameter optimization framework

igel - a delightful machine learning tool that allows you to train, test, and use models without writing code

autokeras - AutoML library for deep learning

automlbenchmark - OpenML AutoML Benchmarking Framework

LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.

humble-benchmarks - Benchmarking programming languages using statistics and machine learning algorithms

PySR - High-Performance Symbolic Regression in Python and Julia

AutoViz - Automatically Visualize any dataset, any size with a single line of code. Created by Ram Seshadri. Collaborators Welcome. Permission Granted upon Request.

Auto_ViML - Automatically Build Multiple ML Models with a Single Line of Code. Created by Ram Seshadri. Collaborators Welcome. Permission Granted upon Request.

studio - MLJAR Studio Desktop Application

xgboost_ray - Distributed XGBoost on Ray