randomforest vs CloudForest

| | randomforest | CloudForest |
|---|---|---|
| Mentions | 2 | 4 |
| Stars | 39 | 735 |
| Growth | - | - |
| Activity | 2.6 | 0.0 |
| Latest Commit | 2 months ago | about 2 years ago |
| Language | Go | Go |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
randomforest
-
Machine Learning
I did end up writing and using a custom library for Random Forest (it's also in Awesome Go) in one real-world project (detecting Alzheimer's and Parkinson's from speech in a mobile app): https://github.com/malaschitz/randomForest. I got better results than the team that used TensorFlow, and, most importantly, I didn't have to use any technology other than Go. For NNs it's probably best to use https://gorgonia.org/, though it's not exactly a user-friendly library. There is, however, a whole book on it: Hands-On Deep Learning with Go.
- Boruta algorithm added to Random Forest library
CloudForest
-
Trinary Decision Trees for missing value handling
I implemented something like this in a [pre-xgboost boosting framework](https://github.com/ryanbressler/CloudForest) ~10 years ago and it worked well.
It isn't even that much of a speed hit with the classical sorting-based CART implementation. However, xgboost and lightgbm use histogram-based approximate sorting, which might be harder to adapt in a performant way, and the code will certainly be a lot messier.
-
Future of Golang
Personally, my Go-to ML tool for tabular data is here: https://github.com/ryanbressler/CloudForest
-
[D] Best methods for imbalanced multi-class classification with high dimensional, sparse predictors
The best method I've seen for dealing with this bias is to create "artificial contrasts" by including possibly many permuted copies of each feature and then running a statistical test of each feature's random-forest importance value against its shuffled contrasts. The method is described here: https://www.jmlr.org/papers/volume10/tuv09a/tuv09a.pdf and there is an implementation here: https://github.com/ryanbressler/CloudForest
What are some alternatives?
GoLearn - Machine Learning for Go
libsvm - libsvm go version
m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies
gago - :four_leaf_clover: Evolutionary optimization library for Go (genetic algorithm, particle swarm optimization, differential evolution)
sklearn - bits of sklearn ported to Go #golang
gobrain - Neural Networks written in go
goml - On-line Machine Learning in Go (and so much more)
go-galib - Genetic Algorithms library written in Go / golang
EAGO
shield - Bayesian text classifier with flexible tokenizers and storage backends for Go
onnx-go - onnx-go gives the ability to import a pre-trained neural network within Go without being linked to a framework or library.
goga - Golang Genetic Algorithm