framework-reproducibility vs wandb

| | framework-reproducibility | wandb |
|---|---|---|
| Mentions | 5 | 16 |
| Stars | 418 | 8,294 |
| Growth | 1.2% | 2.6% |
| Activity | 5.8 | 9.9 |
| Last commit | 7 months ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
framework-reproducibility
- Tensorflow: I'm getting different results from the same code depending on where I run it. [D]
  Even with a fixed seed there's no guarantee that you'll get exactly the same results, because most floating-point operations are not deterministic when parallelized. You can enable determinism flags in your framework to mitigate that, but results may still vary depending on your model and how you're running it.
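For TensorFlow specifically, recent 2.x releases expose an op-determinism switch. A minimal sketch, assuming a reasonably recent TF 2.x install and a placeholder seed value:

```python
import tensorflow as tf

# Seed Python's random module, NumPy, and TensorFlow in one call (TF >= 2.7).
tf.keras.utils.set_random_seed(42)

# Ask TF to use deterministic kernels (or raise if none exists); exposed as an
# experimental API in recent TF 2.x releases.
tf.config.experimental.enable_op_determinism()
```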
- Same seed, different images
- Dealing with non-deterministic result
  Setting the seed alone is not enough because there is randomness introduced by GPU operations (there are ways to eliminate randomness due to GPU operations, like https://github.com/NVIDIA/framework-determinism, but I cannot make it work with the current latest version of TF). Another workaround is not using the GPU, but then the training time makes no sense for me, since I need to iterate fast and try new ideas.
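The environment-variable route documented in that repo for older TF 2.x builds looks roughly like the sketch below; the exact TF versions it covers are an assumption here, so check the repo's README for your release:

```python
import os

# Must be set before TensorFlow runs its first op (older TF 2.x releases;
# newer ones use tf.config.experimental.enable_op_determinism() instead).
os.environ["TF_DETERMINISTIC_OPS"] = "1"
os.environ["TF_CUDNN_DETERMINISTIC"] = "1"

import tensorflow as tf

tf.keras.utils.set_random_seed(42)  # seeds Python, NumPy, and TF together
```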
- No Bee, it's you...
- [D] Do you yourself write 100% reproducible ML code?
  Check out https://github.com/NVIDIA/framework-determinism, which should allow you to make code that runs on the GPU fully reproducible, down to the bit. I've contributed to this repo and the author is extremely helpful.
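The same idea in PyTorch (which the repo also covers) is roughly the following sketch; the seed and the cuBLAS workspace setting are placeholder values:

```python
import os
import torch

# cuBLAS needs a fixed workspace size for deterministic GEMMs on CUDA >= 10.2.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

torch.manual_seed(42)
torch.use_deterministic_algorithms(True)  # error out on nondeterministic ops
torch.backends.cudnn.benchmark = False    # disable nondeterministic autotuning
```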
wandb
- A list of SaaS, PaaS and IaaS offerings that have free tiers of interest to devops and infradev
  Weights & Biases: the developer-first MLOps platform. Build better models faster with experiment tracking, dataset versioning, and model management. Free tier for personal projects only, with 100 GB of storage included.
- Northlight makes Alan Wake 2 shine
- The last sentence of Lowes conveniently missing from OpenAI...
  HuggingFace and wandb.ai (both competitors of OpenAI) also have "do own research" wording.
- Efficient way to tune a network by changing hyperparameters?
  Wandb is the best! https://wandb.ai/
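For hyperparameter tuning specifically, W&B sweeps let you declare a search space and hand a training function to an agent. A minimal sketch; the project name, parameters, and the stand-in `train` function are all hypothetical:

```python
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "lr": {"min": 1e-5, "max": 1e-2},
        "batch_size": {"values": [32, 64, 128]},
    },
}

def train():
    # Stand-in training function: read hyperparameters from the sweep config,
    # train your model, then log the metric the sweep optimizes.
    with wandb.init() as run:
        lr = run.config.lr
        batch_size = run.config.batch_size
        run.log({"val_loss": 0.123})  # placeholder value

sweep_id = wandb.sweep(sweep_config, project="my-project")
wandb.agent(sweep_id, function=train, count=20)
```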
- [D] Monitoring production image models
  To track things I've used wandb.ai at a company in the past, as someone else pointed out. Regarding metrics: this is really specific to your domain, and it is such a broad question. You could count color pixels, look at the distribution of intensity histograms, etc.
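One way to put that into practice is to log intensity histograms and summary statistics for each production batch so drift becomes visible over time. A rough sketch with a synthetic stand-in batch and a hypothetical project name:

```python
import numpy as np
import wandb

run = wandb.init(project="image-monitoring")

# Stand-in for a batch of grayscale production images (64 images, 224x224).
images = np.random.randint(0, 256, size=(64, 224, 224), dtype=np.uint8)

run.log({
    "pixel_intensity": wandb.Histogram(images.flatten()),
    "mean_intensity": float(images.mean()),
})
run.finish()
```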
- How to use the colab notebook version of Dall-E mini and bypass the traffic limit - A guide
  Step 1: The Colab notebook uses wandb.ai, so you need to register for a wandb.ai account beforehand if you want to use the notebook. After registering, go to your homepage, copy the API key, and paste/keep it somewhere.
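In the notebook itself, authenticating with that key typically looks like the sketch below (the key string is a placeholder):

```python
import wandb

# Option 1: interactive prompt (Colab will ask you to paste the key).
wandb.login()

# Option 2: pass the key directly (placeholder shown), or set the
# WANDB_API_KEY environment variable before running the notebook.
# wandb.login(key="YOUR_API_KEY")
```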
- Roadmap for learning MLOps (for DevOps engineers)
  I want to take a look at tools like https://wandb.ai/ and how they would integrate into some of the pipelines I'm playing with.
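Hooking W&B into an existing training pipeline is usually just an init/log/finish pattern. A minimal sketch with hypothetical project, config, and metric names:

```python
import wandb

run = wandb.init(project="pipeline-experiments", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real metric
    run.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```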
- What's a sequel that got you thinking "the people who made this COMPLETELY missed the point of the first one"?
  Can current CGI and AI tech bring back Leslie Nielsen? Might use Unreal Engine and https://www.resemble.ai/ or https://wandb.ai/?
- What MLOps tools and processes do you use?
  I'm currently working for an MLOps company, so I'm heavily using their tools (Weights & Biases), but I've used: custom C++ for deployment; PyTorch + fastai for quick experimentation; Weights & Biases for experiment tracking, hyperparameter tuning, and model versioning (hence why I went to work for them); a custom database + data pipeline; HoloViz for data visualisation (a really nice dashboarding tool); and Jenkins for CI/CD. I also love GitHub Actions.
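The model-versioning piece mentioned above is usually done with W&B Artifacts, which version uploaded files automatically. A sketch with hypothetical artifact, project, and file names:

```python
import wandb

run = wandb.init(project="my-project", job_type="train")

artifact = wandb.Artifact("model-checkpoint", type="model")
artifact.add_file("model.pt")   # placeholder path to a trained checkpoint
run.log_artifact(artifact)      # each upload gets a new version: v0, v1, ...

run.finish()
```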
- [D] Best resources or tools to draw nicer table for comparing different models/frameworks performance
What are some alternatives?
einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
tensorboard - TensorFlow's Visualization Toolkit
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
guildai - Experiment tracking, ML developer tools
pytorch-summary - Model summary in PyTorch similar to `model.summary()` in Keras
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
Tetris-deep-Q-learning-pytorch - Deep Q-learning for playing tetris game
tmrl - Reinforcement Learning for real-time applications - host of the TrackMania Roborace League
deephyper - DeepHyper: Scalable Asynchronous Neural Architecture and Hyperparameter Search for Deep Neural Networks