CS Internship Questions
1 project | reddit.com/r/stanford | 7 May 2022
By the way, most of the time XGBoost works just as well for projects; I would not recommend applying deep learning to every single problem you come across. It is something Stanford CS really likes to showcase, even though it is well known that (1) smaller, less complex models can sometimes perform just as well or have their own interpretability advantages, and (2) within the ML and data science communities, deep learning does not perform as well on tabular datasets, so using it as a default for every problem is just poor practice. However, if you do (god forbid) get a language, speech/audio, vision/imaging, or even time-series problem, then deep learning as a baseline is not the worst idea.
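Point (1) above is easy to demonstrate: on a small tabular dataset, a plain linear model can score close to a gradient-boosted ensemble. A minimal sketch, using scikit-learn models and an illustrative dataset (xgboost's `XGBClassifier` exposes the same `fit`/`score` interface, so it could be swapped in directly):

```python
# Compare a simple linear model against a gradient-boosted ensemble
# on a small tabular dataset. Dataset and settings are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logreg": LogisticRegression(max_iter=5000),
    "gbdt": GradientBoostingClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each model
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(name, round(results[name], 3))
```

On this dataset both models land in the mid-90s for accuracy, which is the point: reach for the simpler, more interpretable model first and only escalate when the data demands it.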
OOM with ML Models (SKlearn, XGBoost, etc), workaround/tips for large datasets?
1 project | reddit.com/r/MLQuestions | 1 Mar 2022
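One standard workaround for out-of-memory errors on large datasets is incremental learning: stream the data in chunks and train with an estimator that supports `partial_fit`, so only one chunk is ever in memory. A minimal sketch with synthetic chunks standing in for reads from disk (in practice you would produce them with something like `pandas.read_csv(..., chunksize=...)`):

```python
# Incremental training on chunked data with scikit-learn's SGDClassifier.
# The chunks here are generated synthetically for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # partial_fit needs all classes up front

for _ in range(20):  # each iteration stands in for one chunk from disk
    X_chunk = rng.normal(size=(500, 10))
    y_chunk = (X_chunk[:, 0] + X_chunk[:, 1] > 0).astype(int)
    clf.partial_fit(X_chunk, y_chunk, classes=classes)

# Evaluate on a held-out batch
X_val = rng.normal(size=(1000, 10))
y_val = (X_val[:, 0] + X_val[:, 1] > 0).astype(int)
print(clf.score(X_val, y_val))
```

Gradient-boosting libraries need different tricks (e.g. XGBoost's external-memory mode or training on a downsampled set), since classic GBDT training is not incremental in the same way.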
xgboost VS CXXGraph - a user suggested alternative
2 projects | 28 Feb 2022
'y contains previously unseen labels' (label encoder)
1 project | reddit.com/r/pythonhelp | 9 Dec 2021
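This error comes from `LabelEncoder.transform` raising on any category it did not see during `fit`. One common workaround is to reserve a sentinel class and map unseen values to it before encoding. A minimal sketch; the color values and the `"<unknown>"` sentinel are illustrative:

```python
# Reproduce the 'y contains previously unseen labels' error, then
# work around it by mapping unseen categories to a sentinel class.
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(["red", "green", "blue", "<unknown>"])  # sentinel fitted up front

try:
    le.transform(["red", "purple"])  # 'purple' was never seen -> raises
except ValueError as e:
    print("ValueError:", e)

# Map anything outside the fitted classes to the sentinel first
known = set(le.classes_)
cleaned = [c if c in known else "<unknown>" for c in ["red", "purple"]]
print(le.transform(cleaned))
```

If the categories feed a model rather than being the target, `OrdinalEncoder` with `handle_unknown="use_encoded_value"` handles this case directly.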
Writing the fastest GBDT library in Rust
6 projects | dev.to | 11 Jan 2022
Here are our benchmarks on training time comparing Tangram's Gradient Boosted Decision Tree Library to LightGBM, XGBoost, CatBoost, and sklearn.
Data Science toolset summary from 2021
13 projects | dev.to | 13 Nov 2021
Catboost - CatBoost is an open-source software library developed by Yandex. It provides a gradient boosting framework that handles categorical features using a permutation-driven alternative to the classical encoding algorithm. Link - https://catboost.ai/
CatBoost Quickstart — ML Classification
2 projects | dev.to | 15 Mar 2021
CatBoost is an open source algorithm based on gradient boosted decision trees. It supports numerical, categorical and text features. Check out the docs.
[D] What are your favorite Random Forest implementations that support categoricals
2 projects | reddit.com/r/MachineLearning | 20 Feb 2021
If you are considering GBDT, check out CatBoost. Unfortunately, RF mode is not available, but the library implements lots of interesting categorical encoding tricks that boost accuracy.
CatBoost and Water Pumps
2 projects | dev.to | 20 Feb 2021
The data contains a large number of categorical features. The most suitable library for obtaining a baseline model, in my opinion, is CatBoost. It is a high-performance, open-source library for gradient boosting on decision trees.
What are some alternatives?
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
MLP Classifier - A handwritten multilayer perceptron classifier using numpy.
tensorflow - An Open Source Machine Learning Framework for Everyone
Keras - Deep Learning for humans
mlpack - mlpack: a scalable C++ machine learning library
Recommender - A C library for product recommendations/suggestions using collaborative filtering (CF)
LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.
scikit-learn - scikit-learn: machine learning in Python
MLflow - Open source platform for the machine learning lifecycle
H2O - H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
vowpal_wabbit - Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.