gorse
CloudForest
| | gorse | CloudForest |
|---|---|---|
| Mentions | 8 | 4 |
| Stars | 7,980 | 735 |
| Growth | 1.4% | - |
| Activity | 7.1 | 0.0 |
| Latest commit | 3 days ago | about 2 years ago |
| Language | Go | Go |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gorse
- How to Use AVX512 in Golang
Thanks, @gnabgib. Your comment is very insightful and reminds me of my mentor correcting my academic paper. The post introduces the basic idea of using AVX512 in Go by writing C code. It has some mistakes and omits many details. A complete example is https://github.com/gorse-io/gorse/tree/master/base/floats
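The comment above describes generating AVX512 kernels from C and calling them from Go. A minimal sketch of the surrounding dispatch pattern is below; the function names are illustrative, not gorse's actual API, and the real vectorized kernel (assembly generated from C, as in the linked `base/floats` package) is replaced here by a portable pure-Go fallback.

```go
package main

import "fmt"

// dotGeneric is the portable fallback. In the gorse-style setup, an
// AVX512 kernel (assembly generated from C) computes the same thing
// 16 float32 lanes at a time.
func dotGeneric(a, b []float32) float32 {
	var sum float32
	for i := range a {
		sum += a[i] * b[i]
	}
	return sum
}

// dot is a function variable so that an init() guarded by CPU feature
// detection (e.g. golang.org/x/sys/cpu's X86.HasAVX512F) can swap in
// the vectorized implementation at startup. (Hypothetical wiring.)
var dot = dotGeneric

func main() {
	a := []float32{1, 2, 3, 4}
	b := []float32{5, 6, 7, 8}
	fmt.Println(dot(a, b)) // 1*5 + 2*6 + 3*7 + 4*8 = 70
}
```

Keeping the dispatch behind a function variable means callers never need build tags or feature checks of their own.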
- [P] Gorse: An open-source recommender system service
CloudForest
- Trinary Decision Trees for missing value handling
I implemented something like this in a [pre-XGBoost boosting framework](https://github.com/ryanbressler/CloudForest) ~10 years ago and it worked well.
It isn't even that much of a speed hit with the classical sorting-based CART implementation. However, XGBoost and LightGBM use histogram-based approximate sorting, which might be harder to adapt in a performant way. And the code will certainly be a lot messier.
I've got a ~10 year old implementation that does something similar calling it "three way splitting" here: https://github.com/ryanbressler/CloudForest
I got the idea from a lab mate, Timo Erkkila, and his RF-ACE project, though neither of us thought it was a particularly novel idea.
- [D] Best methods for imbalanced multi-class classification with high dimensional, sparse predictors
The best method I've seen for dealing with this bias is to create "artificial contrasts" by including possibly many permuted copies of each feature and then doing a statistical test of the random forest importance values for each feature vs. its shuffled contrasts. This method is described here: https://www.jmlr.org/papers/volume10/tuv09a/tuv09a.pdf and there is an implementation here: https://github.com/ryanbressler/CloudForest
What are some alternatives?
Gorgonia - Gorgonia is a library that helps facilitate machine learning in Go.
m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies
libsvm - libsvm go version
gago - :four_leaf_clover: Evolutionary optimization library for Go (genetic algorithm, particle swarm optimization, differential evolution)
gosseract - Go package for OCR (Optical Character Recognition), by using Tesseract C++ library
sklearn - bits of sklearn ported to Go #golang
GoLearn - Machine Learning for Go
gobrain - Neural Networks written in go
shield - Bayesian text classifier with flexible tokenizers and storage backends for Go
go-galib - Genetic Algorithms library written in Go / golang
go-cluster - k-modes and k-prototypes clustering algorithms implementation in Go