vowpal_wabbit vs mxnet (DISCONTINUED)
| | vowpal_wabbit | mxnet |
|---|---|---|
| Mentions | 11 | 4 |
| Stars | 8,394 | 20,644 |
| Growth | 0.4% | - |
| Activity | 8.3 | 4.1 |
| Latest commit | 7 days ago | 5 months ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vowpal_wabbit
-
Microsoft Reinforcement Learning Open Source Fest 2022 – Native CSV Parser
My project at the Reinforcement Learning Open Source Fest 2022 is to add a native CSV parsing feature to Vowpal Wabbit.
-
Predicting numerical values to a very high accuracy
If you only have 198 possible values, then extreme multiclass models might help here, with better precision and faster convergence. For example, probabilistic label trees might be relevant. Vowpal Wabbit also has specific reductions for extreme multiclass problems; it might be worth a try if the other alternatives still don't work out.
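As a concrete illustration, assuming the 198 target values are encoded as class labels 1..198 and the vw binary is installed, a run using one of VW's multiclass reductions might look like this (the file names are placeholders, not from the original post):

```shell
# Hypothetical sketch: train with Vowpal Wabbit's logarithmic-time
# multiclass reduction. train.vw holds one example per line in VW's
# native format, e.g. "42 | f1:0.5 f2:1.3" where 42 is the class label.
vw --log_multi 198 -d train.vw -f model.vw

# Predict on held-out data with the saved model (-t disables learning).
vw -t -i model.vw -d test.vw -p predictions.txt
```

`--oaa 198` (one-against-all) is the simpler baseline reduction; `--log_multi` trades a little accuracy for prediction time logarithmic in the number of classes.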
-
Performance comparison: counting words in Python, Go, C++, C, AWK, Forth, and Rust
You're likely correct, but I recall attending a lecture by John Langford of https://vowpalwabbit.org/ in which he ran some form of N-gram C++ based NLP model, including summary statistics on performance, in less time than wc -l took on the same data. It must use some neat hashing tricks, but it was still cool.
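Part of that speed comes from the hashing trick: features are mapped straight to array indices by hashing, so there is no vocabulary dictionary to build or look up. A minimal Python sketch of the idea (VW itself uses MurmurHash; md5 and the bucket count here are illustrative stand-ins):

```python
import hashlib

def hashed_features(tokens, num_buckets=2**18):
    """Map tokens to a sparse count vector via hashing, with no stored vocabulary."""
    vec = {}
    for tok in tokens:
        # Stable hash -> bucket index; rare collisions are tolerated by design.
        idx = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16) % num_buckets
        vec[idx] = vec.get(idx, 0) + 1
    return vec

counts = hashed_features(["the", "quick", "the"])
```

Because the index is computed on the fly, memory is bounded by `num_buckets` regardless of how many distinct n-grams the stream contains.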
mxnet
-
List of AI-Models
-
Introduction to deep learning hardware in the cloud
Build – Choose a machine learning framework (such as TensorFlow, PyTorch, Apache MXNet, etc.)
-
just released my Clojure AI book
Clojure and Python also have bindings to the Apache MXNet library. Is there a reason why you didn't use them in some of your projects?
-
Can Apple's M1 help you train models faster and cheaper than Nvidia's V100?
> But you still lose something, e.g. if you use half precision on V100 you get virtually double speed, if you do on a 1080 / 2080 you get... nothing because it's not supported.
That's not true. FP16 is supported and can be fast on a 2080, although some frameworks fail to see the speed-up. I filed a bug report about this a year ago: https://github.com/apache/incubator-mxnet/issues/17665
What consumer GPUs lack is ECC and fast FP64.
What are some alternatives?
Caffe - Caffe: a fast open framework for deep learning.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Caffe2
mlpack - mlpack: a fast, header-only C++ machine learning library
Porcupine - On-device wake word detection powered by deep learning
Theano - Theano was a Python library that allowed you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It is being continued as PyTensor: www.github.com/pymc-devs/pytensor
Serpent.AI - Game Agent Framework. Helping you create AIs / Bots that learn to play any game you own!
xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
faiss-server - faiss serving :)
catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
SHOGUN - Shōgun