framework-reproducibility vs horovod
| | framework-reproducibility | horovod |
|---|---|---|
| Mentions | 5 | 8 |
| Stars | 418 | 13,969 |
| Growth | 1.2% | 0.5% |
| Activity | 5.8 | 5.2 |
| Latest commit | 7 months ago | about 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
framework-reproducibility
- Tensorflow: I'm getting different results from the same code depending on where I run it. [D]
Even with a fixed seed there's no guarantee that you'll get exactly the same results, because most floating-point operations are not deterministic when parallelized. You can enable determinism flags in your framework to mitigate that, but results may still vary depending on your model and how you're running it.
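As a concrete illustration of the determinism flags being described, here is a minimal sketch, assuming TensorFlow 2.8+ (where `tf.config.experimental.enable_op_determinism()` is available):

```python
import random

import numpy as np
import tensorflow as tf

# Seed every RNG the program touches, not just the framework's.
random.seed(0)
np.random.seed(0)
tf.random.set_seed(0)

# Ask TensorFlow to select deterministic kernels; ops without a
# deterministic implementation raise an error instead of silently varying.
tf.config.experimental.enable_op_determinism()
```

Even then, bit-identical results are only expected across runs with the same library versions, drivers, and hardware, which is consistent with the "depending on where I run it" behavior in the question.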
- Same seed, different images
- Dealing with non-deterministic result
Setting the seed alone is not enough, because there will be randomness resulting from GPU operations (there are ways to eliminate randomness due to GPU operations, like https://github.com/NVIDIA/framework-determinism, but I cannot make it work with the current latest version of TF). Another workaround is not using the GPU, but then the training time does not make sense, as I need to iterate fast while trying new ideas.
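For reference, on older TensorFlow versions (roughly 2.1–2.7) the mechanism the framework-determinism project documented was an environment variable that must be set before TensorFlow executes any ops; a minimal sketch:

```python
import os

# Tell TF to prefer deterministic GPU kernels (TF 2.1+; superseded by
# tf.config.experimental.enable_op_determinism() in TF 2.8+).
# Must be set before any op runs.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

import tensorflow as tf  # noqa: E402  (imported after setting the env var)
```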
- No Bee, it's you...
- [D] Do you yourself write 100% reproducible ML code?
Check out https://github.com/NVIDIA/framework-determinism, which should allow you to make code that runs on a GPU fully reproducible, down to the bit. I've contributed to this repo and the author is extremely helpful.
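To make the "reproducible to the bit on GPU" idea concrete on the PyTorch side, here is a minimal sketch of PyTorch's own determinism switches (assuming PyTorch 1.8+ with CUDA; versions and hardware still have to match across runs):

```python
import os

# cuBLAS needs this set before the first CUDA call to make its
# GEMM kernels deterministic.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

import torch

torch.manual_seed(0)
# Error out on any op that has no deterministic implementation.
torch.use_deterministic_algorithms(True)
# cuDNN: disable autotuning and non-deterministic convolution kernels.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
```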
horovod
- Discussion Thread
Broke: using Horovod
- [D] What is the recommended approach to training NN on big data set?
And in case scaling is really important to you, may I suggest you look into Horovod?
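For a sense of what that looks like in practice, here is a minimal sketch of Horovod's data-parallel pattern with Keras, launched with something like `horovodrun -np 4 python train.py` (assumes `horovod[tensorflow]` is installed; the model and data are placeholders):

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to one GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # placeholder model

# Scale the learning rate by the worker count and wrap the optimizer so
# gradients are averaged across workers via allreduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(loss="mse", optimizer=opt)

model.fit(
    tf.random.normal((256, 8)),  # placeholder data
    tf.random.normal((256, 1)),
    # Start all workers from rank 0's initial weights.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```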
- Anyone know of any papers or models for segmenting satellite images of a city into things like roads, buildings, parks, etc?
Training is not the same as inference (doing the segmentation), so that scale is probably off by a lot: one or two orders of magnitude, depending on the specifics of what hardware you're running on, and your training and eval datasets would be several orders of magnitude smaller. FAANGs would parallelize that training as well (I don't remember if UNet is inherently parallelizable for training) via their internal equivalent of Horovod, so they'll do a GPU-month's worth of training in less than a day.
- Embedding Python
[mpiexec@&lt;host&gt;] match_arg (utils/args/args.c:163): unrecognized argument quiet
[mpiexec@&lt;host&gt;] HYDU_parse_array (utils/args/args.c:178): argument matching returned error
[mpiexec@&lt;host&gt;] parse_args (ui/mpich/utils.c:1639): error parsing input array
[mpiexec@&lt;host&gt;] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1691): unable to parse user arguments
[mpiexec@&lt;host&gt;] main (ui/mpich/mpiexec.c:127): error parsing parameters

I believe this is due to mpich being installed: https://github.com/horovod/horovod/issues/1637
- [D] PyTorch Distributed Training Libraries: What are the current options?
Check out Horovod - https://github.com/horovod/horovod
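As a rough sketch of what Horovod's PyTorch integration looks like (assuming `horovod[pytorch]` is installed and a CUDA machine; the model and data are placeholders):

```python
import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())  # one GPU per process

model = torch.nn.Linear(8, 1).cuda()  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Average gradients across workers during optimizer.step().
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)

# Start all workers from rank 0's parameters and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for _ in range(10):
    optimizer.zero_grad()
    x = torch.randn(32, 8, device="cuda")  # placeholder data
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()
```

Like the Keras variant, this is launched with `horovodrun -np <workers> python train.py`.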
- [D] GPU buying recommendation
If you just want to run TensorFlow or PyTorch in a Jupyter notebook, setting up the environment shouldn't be difficult. I know that AWS has a marketplace of preconfigured images. However, you can go as advanced as setting up a cluster of GPU-equipped nodes running Horovod (https://github.com/horovod/horovod) to do distributed machine learning. Yes, there's a learning curve, but you cannot acquire this skill set any other way.
- SKLearn, TensorFlow, etc vs Spark ML?
I'm the maintainer for an open source project called Horovod that allows you to distribute deep learning training (e.g., TensorFlow) on platforms like Spark.
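For instance, Horovod ships a Spark integration; a minimal sketch of launching a training function on Spark executors (assuming `horovod[spark]` is installed and a SparkSession is already running) might look like:

```python
import horovod.spark

def train():
    # Runs once per Horovod task on the Spark executors.
    import horovod.tensorflow.keras as hvd
    hvd.init()
    # ... build and fit a model here, as in any Horovod script ...
    return hvd.rank()

# Distribute `train` across 4 Horovod processes on the Spark cluster.
ranks = horovod.spark.run(train, num_proc=4)
```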
- Cluster machine learning
You'll want to use Horovod to run Keras in a distributed system, then use Slurm to manage the cluster and run the job.
What are some alternatives?
einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
petastorm - Petastorm library enables single machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as Tensorflow, Pytorch, and PySpark and can be used from pure Python code.
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
DeepDanbooru - AI based multi-label girl image classification system, implemented by using TensorFlow.
mpi4jax - Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python :zap:
NudeNet - Neural Nets for Nudity Detection and Censoring
onepanel - The open source, end-to-end computer vision platform. Label, build, train, tune, deploy and automate in a unified platform that runs on any cloud and on-premises.
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
seq2seq - A general-purpose encoder-decoder framework for Tensorflow
pytorch-summary - Model summary in PyTorch similar to `model.summary()` in Keras
nsfw_model - Keras model of NSFW detector