dino vs pytorch-lightning

| | dino | pytorch-lightning |
|---|---|---|
| Mentions | 7 | 9 |
| Stars | 5,881 | 26,952 |
| Growth | 1.4% | 1.3% |
| Activity | 1.0 | 9.9 |
| Last commit | 24 days ago | 2 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dino
- Batch-wise processing or image-by-image processing? (DINO V1)
- [P] Image search with localization and open-vocabulary reranking.
I also implemented one based on the self-attention maps from the DINO-trained ViTs. This worked pretty well when the attention maps were combined with some traditional computer vision to get bounding boxes. It seemed like an OK compromise between domain specialization and location specificity. I did not try any saliency- or gradient-based methods, as I was not sure about their generalization and speed, respectively. I know LAVIS has an implementation of Grad-CAM, and it seems to work well in Plug-and-Play VQA.
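A minimal sketch of the traditional-CV step that comment describes: threshold a 2D attention map and take the bounding rectangles of the blobs that survive. The helper name and the quantile cutoff are illustrative, not from the comment.

```python
import cv2
import numpy as np

def attention_to_boxes(attn_map: np.ndarray, quantile: float = 0.6):
    """Hypothetical helper: 2D attention map (H, W) -> list of (x, y, w, h)."""
    # Rescale to 0-255 so OpenCV's threshold/contour ops apply.
    norm = cv2.normalize(attn_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Keep only the most-attended pixels; the cutoff is a tunable guess.
    _, mask = cv2.threshold(norm, np.quantile(norm, quantile), 255, cv2.THRESH_BINARY)
    # Each remaining connected blob becomes a candidate bounding box.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```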
- Unsupervised semantic segmentation
You will probably need an unwieldy amount of data and compute to reproduce it, so your best option would be to use the pretrained models available on GitHub.
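For reference, those pretrained backbones can be pulled via torch.hub; the entry-point names (dino_vits16, dino_vits8, dino_vitb16, ...) come from the hubconf in facebookresearch/dino, and the 384-dim output noted below assumes the ViT-S variant.

```python
import torch

# Load a pretrained DINO ViT-S/16 backbone from the official repo's
# torch.hub entry points (dino_vits16, dino_vits8, dino_vitb16, ...).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

# The backbone maps an ImageNet-normalized batch to a CLS embedding.
x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    feats = model(x)
print(feats.shape)  # ViT-S/16 gives a 384-dim embedding per image
```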
- [D] Why Transformers are taking over the Computer Vision world: Self-Supervised Vision Transformers with DINO explained in 7 minutes!
[Full Explanation Post] [Arxiv] [Project Page]
- A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers
Except he is actually talking about the new DINO model created by Facebook that was released on Friday, which is a new approach to image transformers for unsupervised segmentation. Here's its GitHub.
- [D] Paper Explained - DINO: Emerging Properties in Self-Supervised Vision Transformers (Full Video Analysis)
Code: https://github.com/facebookresearch/dino
- [R] DINO and PAWS: Advancing the state of the art in computer vision with self-supervised Transformers
pytorch-lightning
- SB-1047 will stifle open-source AI and decrease safety
It's very easy to get started, right in your terminal: no fees, no credit card at all.
And there are cloud providers like https://replicate.com/ and https://lightning.ai/ that will let you use your LLM via an API key just like you did with OpenAI if you need that.
You don't need OpenAI - nobody does.
- Lightning AI Studios – A persistent GPU cloud environment
- How do I get started with artificial intelligence?
https://see.stanford.edu/Course/CS229 https://lightning.ai/ https://www.youtube.com/watch?v=00s9ireCnCw&t=57s https://towardsdatascience.com/
- Best practice for saving logits/activation values of model in PyTorch Lightning
I've been wondering what the recommended method is for saving logits/activations using PyTorch Lightning. I've looked at Callbacks, Loggers, and ModelHooks, but none of their use cases seem to cover this kind of activity (even if I were to create my own custom variants of each utility). The utility of the ModelCheckpoint Callback makes me feel like custom Callbacks would be the way to go, but I'm not quite sure. This closed GitHub issue does address my issue to some extent.
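One way to do what the post asks, sketched as a custom Callback. The hook names and signatures are from the Lightning Callback API; the output path and the assumption that `validation_step` returns the logits directly are mine.

```python
import torch
import pytorch_lightning as pl

class LogitSaver(pl.Callback):
    """Buffer whatever validation_step returns and dump it once per epoch.

    Assumes the LightningModule's validation_step returns the logits
    tensor directly; adapt the indexing if it returns a dict instead.
    """

    def __init__(self, out_path: str = "val_logits.pt"):
        self.out_path = out_path
        self._buffer = []

    def on_validation_batch_end(self, trainer, pl_module, outputs,
                                batch, batch_idx, dataloader_idx=0):
        # `outputs` is the return value of validation_step for this batch.
        self._buffer.append(outputs.detach().cpu())

    def on_validation_epoch_end(self, trainer, pl_module):
        torch.save(torch.cat(self._buffer), self.out_path)
        self._buffer.clear()

trainer = pl.Trainer(callbacks=[LogitSaver("val_logits.pt")])
```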
- New to ML, which is easier to learn - Tensorflow or PyTorch?
- PyTorch Lightning – DL framework to train, deploy, and ship AI fast
- We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
- An elegant and strong PyTorch Trainer
For lightweight use, pytorch-lightning is too heavy, and its source code is very difficult for beginners to read, at least for me.
- [D] Mixed Precision Training: Difference between BF16 and FP16
For the A100 GPU, theoretical performance is the same for FP16/BF16, and both use the same number of bits, so memory usage should be the same. However, since BF16 support is quite new in PyTorch, performance still seems to depend on the underlying operators used (PyTorch Lightning debugging is in progress here).
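For context, switching between the two in Lightning is just the Trainer's `precision` flag; the string values below are the 2.x spelling (older releases took `16` / `"bf16"`).

```python
import pytorch_lightning as pl

# FP16 mixed precision: same bit width as BF16 but a narrower dynamic
# range, so Lightning applies loss scaling under the hood.
fp16_trainer = pl.Trainer(precision="16-mixed")

# BF16 mixed precision: wider dynamic range (FP32-sized exponent), no
# loss scaling needed; nominally the same throughput on A100-class GPUs.
bf16_trainer = pl.Trainer(precision="bf16-mixed")
```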
What are some alternatives?
simsiam-cifar10 - Code to train the SimSiam model on cifar10 using PyTorch
lnd - Lightning Network Daemon ⚡️
Transformer-SSL - This is an official implementation for "Self-Supervised Learning with Swin Transformers".
Eclair - A scala implementation of the Lightning Network.
pytorch-metric-learning - The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
mmdetection - OpenMMLab Detection Toolbox and Benchmark
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
composer - Supercharge Your Model Training
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
umbrel - A beautiful home server OS for self-hosting with an app store. Buy a pre-built Umbrel Home with umbrelOS, or install on a Raspberry Pi 4, Pi 5, any Ubuntu/Debian system, or a VPS.
lightly - A python library for self-supervised learning on images.
fastai - The fastai deep learning library