mxnet
SHOGUN
| | mxnet | SHOGUN |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 20,644 | 3,005 |
| Growth | - | 0.5% |
| Activity | 4.1 | 4.8 |
| Latest commit | 6 months ago | 4 months ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mxnet
-
List of AI-Models
-
Introduction to deep learning hardware in the cloud
Build – Choose a machine learning framework (such as TensorFlow, PyTorch, Apache MXNet, etc.)
-
just released my Clojure AI book
Clojure and Python also have bindings to the Apache MXNet library. Is there a reason why you didn't use them in some of your projects?
-
Can Apple's M1 help you train models faster and cheaper than Nvidia's V100?
> But you still lose something, e.g. if you use half precision on V100 you get virtually double speed, if you do on a 1080 / 2080 you get... nothing because it's not supported.
That's not true. FP16 is supported and can be fast on 2080, although some frameworks fail to see the speed-up. I filed a bug report about this a year ago: https://github.com/apache/incubator-mxnet/issues/17665
What consumer GPUs lack is ECC and fast FP64.
SHOGUN
-
Changing std::sort at Google’s Scale and Beyond
The function is trying to get the median, which is not defined for an empty set. With this particular implementation, there is an assert for that:
https://github.com/shogun-toolbox/shogun/blob/9b8d85/src/sho...
Unrelatedly, but from the same section:
> Fixes are trivial, access the nth element only after the call being made. Be careful.
Wouldn't the proper fix be to do the nth_element for the larger element first (for those cases that don't already do that) and then adjust the end to begin + larger_n for the second nth_element call? Otherwise the second call scans [begin + larger_n, end) again for no reason at all.
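The suggestion above can be sketched as follows. This is a hypothetical `median` helper, not Shogun's actual implementation: it asserts on an empty input (median is undefined there), selects the larger middle element first, and then restricts the second nth_element call to the left partition so the upper part of the range is never re-scanned.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical median via std::nth_element, illustrating the two-call
// optimization discussed above. Takes the vector by value because
// nth_element reorders its input.
double median(std::vector<double> v) {
    assert(!v.empty() && "median is undefined for an empty set");
    const std::size_t n = v.size();
    const std::size_t upper = n / 2;  // index of the larger middle element

    // First select the larger of the two middle elements.
    std::nth_element(v.begin(), v.begin() + upper, v.end());
    if (n % 2 == 1)
        return v[upper];

    // After the first call, every element in [begin, begin + upper) is
    // <= v[upper], so the smaller middle element can be selected from
    // that prefix alone -- no need to look at [begin + upper, end) again.
    std::nth_element(v.begin(), v.begin() + upper - 1, v.begin() + upper);
    return (v[upper - 1] + v[upper]) / 2.0;
}
```

Since the smaller middle element is simply the maximum of the left partition, the second call could equally be `std::max_element` over that prefix; `nth_element` is kept here to match the wording of the discussion.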
What are some alternatives?
Caffe - Caffe: a fast open framework for deep learning.
mlpack - mlpack: a fast, header-only C++ machine learning library
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Caffe2
Dlib - A toolkit for making real world machine learning and data analysis applications in C++
vowpal_wabbit - Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.
Porcupine - On-device wake word detection powered by deep learning
xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
Theano - Theano was a Python library that allowed you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It is being continued as PyTensor: www.github.com/pymc-devs/pytensor
OpenHotspot