mljar-examples vs Python_NN
| | mljar-examples | Python_NN |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 58 | 0 |
| Growth | - | - |
| Activity | 3.3 | 0.0 |
| Last Commit | 5 months ago | almost 2 years ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mljar-examples
MLJAR Automated Machine Learning for Tabular Data (Stacking, Golden Features, Explanations, and AutoDoc)
All ML experiments have automatic documentation that creates Markdown reports ready to commit to the repo (example1, example2).
Show HN: Mljar Automated Machine Learning for Tabular Data (Explanation, AutoDoc)
The creator here. I've been working on AutoML since 2016. I think the latest release (0.7.15) of MLJAR AutoML is amazing. It has a ton of fantastic features that I always wanted to have in AutoML:
- Operates in three modes: Explain, Perform, Compete.
- `Explain` is for exploratory data analysis and checking default performance (without hyperparameter tuning). It includes Automatic Exploratory Data Analysis.
- `Perform` is for building production-ready models (HP tuning + ensembling).
- `Compete` is for solving ML competitions within a limited amount of time (HP tuning + ensembling + stacking).
- All ML experiments have automatic documentation which creates Markdown reports ready to commit to the repo ([example](https://github.com/mljar/mljar-examples/tree/master/Income_c...)).
- The package produces extensive explanations: decision tree visualization, feature importance, SHAP explanations, advanced metrics values.
- It has advanced feature engineering, like: Golden Features, Features Selection, Time and Text Transformations, Categoricals handling with target, label, or one-hot encodings.
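To make the categorical-handling point above concrete, here is a minimal NumPy sketch of the three encodings mentioned (label, one-hot, and target encoding). The helper names and sample data are hypothetical, not MLJAR's actual API:

```python
import numpy as np

def label_encode(values):
    """Map each distinct category to an integer label (hypothetical helper)."""
    categories = sorted(set(values))
    mapping = {c: i for i, c in enumerate(categories)}
    return np.array([mapping[v] for v in values]), mapping

def one_hot_encode(labels, num_classes):
    """Expand integer labels into one-hot indicator columns."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def target_encode(values, target):
    """Replace each category with the mean of the target within that category."""
    target = np.asarray(target, dtype=float)
    means = {c: target[[v == c for v in values]].mean() for c in set(values)}
    return np.array([means[v] for v in values])

# Hypothetical toy column and binary target
colors = ["red", "green", "red", "blue"]
y = [1, 0, 1, 0]

labels, mapping = label_encode(colors)          # blue -> 0, green -> 1, red -> 2
one_hot = one_hot_encode(labels, len(mapping))  # shape (4, 3)
encoded = target_encode(colors, y)              # red -> 1.0, green -> 0.0, blue -> 0.0
```

In a real pipeline, target encoding is usually computed with cross-validation folds to avoid leaking the target into the features; the sketch above skips that for brevity.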
Python_NN
Difficulty in using LSTMs for text generation
About the issue with repeating characters: it is expected to happen, since certain loops repeat. To solve this, remove the line `q1 = np.argmax(p1.cpu(), axis=1)[-1].item()` and instead sample based on the softmax probabilities. Check this code where I perform sampling. The `vec = vec**2` step is used to decrease randomness; you can see how your model behaves with it and either keep it or remove it.
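The fix described above, replacing argmax with sampling from the softmax distribution and optionally sharpening it by squaring, can be sketched roughly like this. This is a pure-NumPy illustration: `probs` stands in for the model's softmax output for the last time step, and the helper name is hypothetical:

```python
import numpy as np

def sample_next_char(probs, sharpen=2):
    """Sample an index from a probability vector instead of taking argmax.

    Raising the probabilities to a power > 1 (here, squaring) sharpens the
    distribution, reducing randomness while still allowing variety.
    """
    vec = np.asarray(probs, dtype=float) ** sharpen
    vec = vec / vec.sum()  # renormalize so the entries sum to 1
    return np.random.choice(len(vec), p=vec)

# Hypothetical softmax output over a 4-character vocabulary
probs = [0.1, 0.2, 0.6, 0.1]
samples = [sample_next_char(probs) for _ in range(1000)]
```

Because sampling keeps low-probability characters reachable, the generator can escape the deterministic loops that argmax decoding falls into.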
What are some alternatives?
mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation
Deep-Learning-Computer-Vision - My assignment solutions for Stanford's CS231n (CNNs for Visual Recognition) and Michigan's EECS 498-007/598-005 (Deep Learning for Computer Vision), 2020 versions.
igel - a delightful machine learning tool that allows you to train, test, and use models without writing code
automlbenchmark - OpenML AutoML Benchmarking Framework
humble-benchmarks - Benchmarking programming languages using statistics and machine learning algorithms