PSA: You don't need fancy stuff to do good work.
10 projects | reddit.com/r/datascience | 9 May 2023
Finally, when it comes to building models and making predictions, Python and R have a plethora of options available. Libraries like scikit-learn, statsmodels, and TensorFlow in Python, or caret, randomForest, and xgboost in R, provide powerful machine learning algorithms and statistical models that can be applied to a wide range of problems. What's more, these libraries are open-source and have extensive documentation and community support, making it easy to learn and apply new techniques without needing specialized training or expensive software licenses.
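As a minimal sketch of the kind of workflow the snippet above describes, here is a gradient-boosting classifier trained with scikit-learn (one of the libraries it names); the synthetic dataset and parameter values are illustrative, not from the original post:

```python
# Minimal sketch: train and evaluate a gradient-boosted classifier
# with scikit-learn on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Generate a toy binary-classification dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a gradient-boosting model and score it on held-out data.
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

The same few lines transfer almost unchanged to xgboost or CatBoost, since all three expose a scikit-learn-style `fit`/`predict` interface.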
xgboost VS CXXGraph - a user-suggested alternative
2 projects | 28 Feb 2022
2 projects | dev.to | 5 Jul 2022
What's New with AWS: Amazon SageMaker built-in algorithms now provides four new Tabular Data Modeling Algorithms
3 projects | dev.to | 28 Jun 2022
CatBoost is another popular and high-performance open-source implementation of the Gradient Boosting Decision Tree (GBDT). To learn how to use this algorithm, please see example notebooks for Classification and Regression.
Writing the fastest GBDT library in Rust
6 projects | dev.to | 11 Jan 2022
Here are our benchmarks on training time comparing Tangram's Gradient Boosted Decision Tree Library to LightGBM, XGBoost, CatBoost, and sklearn.
Data Science toolset summary from 2021
13 projects | dev.to | 13 Nov 2021
Catboost - CatBoost is an open-source software library developed by Yandex. It provides a gradient boosting framework that handles categorical features using a permutation-driven alternative to the classical encoding algorithm. Link - https://catboost.ai/
CatBoost Quickstart — ML Classification
2 projects | dev.to | 15 Mar 2021
CatBoost is an open source algorithm based on gradient boosted decision trees. It supports numerical, categorical and text features. Check out the docs.
[D] What are your favorite Random Forest implementations that support categoricals
2 projects | reddit.com/r/MachineLearning | 20 Feb 2021
If you're considering GBDT, check out CatBoost. Unfortunately, RF mode is not available, but the library implements lots of interesting categorical encoding tricks that boost accuracy.
CatBoost and Water Pumps
2 projects | dev.to | 20 Feb 2021
The data contains a large number of categorical features. In my opinion, the most suitable choice for obtaining a baseline model is CatBoost. It is a high-performance, open-source library for gradient boosting on decision trees.
What are some alternatives?
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
MLP Classifier - A handwritten multilayer perceptron classifier using numpy.
tensorflow - An Open Source Machine Learning Framework for Everyone
Keras - Deep Learning for humans
mlpack - mlpack: a fast, header-only C++ machine learning library
LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.
Recommender - A C library for product recommendations/suggestions using collaborative filtering (CF)
scikit-learn - scikit-learn: machine learning in Python
Simple GAN - Attempt at implementation of a simple GAN using Keras
MLflow - Open source platform for the machine learning lifecycle
H2O - H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.