[D] Colab TPU low performance
2 projects | reddit.com/r/MachineLearning | 18 Nov 2021
I wanted to make a quick performance comparison between the GPU (Tesla K80) and TPU (v2-8) available in Google Colab with PyTorch. To do so quickly, I used an MNIST example from pytorch-lightning that trains a simple CNN.
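A minimal sketch of the kind of Lightning MNIST benchmark described above, assuming `pytorch-lightning` and `torchvision` are installed; the model architecture and hyperparameters here are illustrative, not those of the original example.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from torchvision import datasets, transforms

class MNISTClassifier(pl.LightningModule):
    """A simple CNN: one conv layer, max-pool, then a linear classifier."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 32, 3)          # 28x28 -> 26x26
        self.fc = torch.nn.Linear(32 * 13 * 13, 10)    # after 2x2 max-pool

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv(x), 2))
        return self.fc(x.flatten(1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

train_loader = DataLoader(
    datasets.MNIST(".", train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=64,
)

# tpu_cores=8 targets a Colab v2-8 TPU; swap in gpus=1 for the K80 run.
trainer = pl.Trainer(max_epochs=1, tpu_cores=8)
trainer.fit(MNISTClassifier(), train_loader)
```

The same LightningModule runs unchanged on either accelerator, which is what makes this kind of GPU-vs-TPU comparison quick to set up.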
[D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
2 projects | reddit.com/r/MachineLearning | 10 Nov 2021
We've noticed GPU 0 on our 3-GPU system is sometimes idle (which would explain the performance differences). However, it's unclear to us why that may be. Similar to this issue
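The usual first-line mitigations for a CPU-bound input pipeline are to parallelize loading and augmentation across worker processes and to overlap host-to-device copies. A configuration sketch (not a diagnosis of the specific issue above; `dataset` is a placeholder for your own `Dataset`):

```python
from torch.utils.data import DataLoader

loader = DataLoader(
    dataset,                  # placeholder: your Dataset with CPU-side augmentations
    batch_size=256,
    num_workers=8,            # run loading/augmentation in parallel worker processes
    pin_memory=True,          # page-locked buffers enable faster async GPU copies
    persistent_workers=True,  # avoid respawning workers at every epoch
    prefetch_factor=4,        # batches prefetched per worker ahead of the GPU
)
```

If the workers still can't keep up, the remaining option is moving the augmentations themselves onto the GPU.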
[P] An introduction to PyKale https://github.com/pykale/pykale, a PyTorch library that provides a unified pipeline-based API for knowledge-aware multimodal learning and transfer learning on graphs, images, texts, and videos to accelerate interdisciplinary research. Welcome feedback/contribution!
2 projects | reddit.com/r/MachineLearning | 25 Apr 2021
If you want a good example for reference, take a look at PyTorch Lightning's README (https://github.com/PyTorchLightning/pytorch-lightning). It answers the three questions of "what is this", "why should I care", and "how do I use it" almost instantly.
2 projects | reddit.com/r/pytorch | 24 Apr 2021
[D] Advanced Takeaways from fast.ai book
2 projects | reddit.com/r/MachineLearning | 23 Mar 2021
Lower-precision training can help, and in PyTorch Lightning it's just a simple flag you can set.
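The "simple flag" in question is the Trainer's `precision` argument; a configuration sketch, where `model` and `train_loader` stand in for your own objects:

```python
import pytorch_lightning as pl

# precision=16 switches training to fp16 mixed precision instead of fp32
trainer = pl.Trainer(gpus=1, precision=16)
trainer.fit(model, train_loader)  # placeholders: your LightningModule and DataLoader
```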
[D] How to be more productive while doing Deep Learning experiments?
10 projects | reddit.com/r/MachineLearning | 25 Feb 2021
First of all, use high-level ML frameworks (AllenNLP, PyTorch Lightning). There's no need to write boilerplate code and implement standard ML approaches from scratch. Here are some suggestions (though more NLP-focused) that I feel improved my research coding experience a lot.
DDP with model parallelism with multi host multi GPU system
1 project | reddit.com/r/pytorch | 7 Feb 2021
PyTorch Lightning Flash appears to be copying fastai (without any credit) [D]
2 projects | reddit.com/r/MachineLearning | 5 Feb 2021
According to the README it's patent pending, but I learned about that from this HN thread. Funny thing is I didn't even remember there was a snafu about patents, but looked it up because of some vague recollection of the PL founder getting into a tussle about some other trivial topic (apparently it was how well PyTorch works on TPUs).
[D] Training 10x Larger Models and Accelerating Training with ZeRO-Offloading
3 projects | reddit.com/r/MachineLearning | 25 Jan 2021
I also asked for corresponding support in PyTorch Lightning in this issue: Add deepspeed support · Issue #817 · PyTorchLightning/pytorch-lightning (github.com)
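PyTorch Lightning did eventually gain a DeepSpeed integration, including ZeRO-Offload. A configuration sketch (the strategy string follows later Lightning releases, not the API at the time of the linked issue; `model` and `train_loader` are placeholders):

```python
import pytorch_lightning as pl

trainer = pl.Trainer(
    gpus=1,
    precision=16,                          # DeepSpeed is typically run in fp16
    strategy="deepspeed_stage_2_offload",  # ZeRO stage 2 with optimizer-state CPU offloading
)
trainer.fit(model, train_loader)  # placeholders: your LightningModule and DataLoader
```

Offloading optimizer states to CPU memory is what lets a single GPU train models that would otherwise not fit, at the cost of extra host-device traffic.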
Nicest and cleanest Deep Learning codebases out there
3 projects | reddit.com/r/deeplearning | 22 Jan 2021
When I look at the PyTorch Lightning animation, the stuff on the left is easy for me to follow, and the code on the right, formatted into classes, is hard. My goal is to start thinking and coding more like the code on the right. What I typically find hard about reading code where everything is inside classes, methods, functions, decorators, etc. (i.e. the code on the right) is that there will be a place that executes all these methods in a linear way, but I keep having to scroll up to the class to see what it is actually doing. On the left I can just read through the code top to bottom. I even find myself copying the code out of classes the first time I read it so it executes like the code on the left :P I feel like what I'm doing is the equivalent of typing with only my index fingers…
What are some alternatives?
detectron2 - Detectron2 is FAIR's next-generation platform for object detection, segmentation and other visual recognition tasks.
mmdetection - OpenMMLab Detection Toolbox and Benchmark
pytorch-grad-cam - Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers. Including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM
Sacred - Sacred is a tool to help you configure, organize, log and reproduce experiments developed at IDSIA.
metaflow - :rocket: Build and manage real-life data science projects with ease!
pytorch-forecasting - Time series forecasting with PyTorch
guildai - Experiment tracking, ML developer tools
tmux - tmux source code
omegaconf - Flexible Python configuration system. The last one you will ever need.
Keras - Deep Learning for humans
fiftyone - The open-source tool for building high-quality datasets and computer vision models
QUANTAXIS - A fully local quantitative-trading solution supporting task scheduling and distributed deployment, covering data, backtesting, simulation, trading, visualization, and multi-account management for stocks, futures, options, Hong Kong stocks, and cryptocurrencies.