| | dino | lightly |
|---|---|---|
| Mentions | 7 | 16 |
| Stars | 6,697 | 3,311 |
| Growth | 2.1% | 1.6% |
| Activity | 0.0 | 9.3 |
| Latest commit | 9 months ago | 9 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dino
- Batch-wise processing or image-by-image processing? (DINO V1)
- [P] Image search with localization and open-vocabulary reranking.
I also implemented one based on the self-attention maps from DINO-trained ViTs. This worked pretty well when the attention maps were combined with some traditional computer vision to get bounding boxes. It seemed an OK compromise between domain specialization and location specificity. I did not try any saliency- or gradient-based methods, as I was not sure about their generalization and speed, respectively. I know LAVIS has an implementation of Grad-CAM, and it seems to work well in Plug-and-Play VQA.
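The attention-to-bounding-box step described in this post can be sketched with plain NumPy. This is a minimal illustration, not the poster's actual code: the function name `attention_to_bbox` and the quantile threshold are illustrative, and the attention map is assumed to already be extracted from a DINO ViT and reshaped to the patch grid.

```python
import numpy as np

def attention_to_bbox(attn_map: np.ndarray, quantile: float = 0.9):
    """Threshold a 2-D attention map and return the bounding box
    (x_min, y_min, x_max, y_max) of the above-threshold region.

    `attn_map` is assumed to be one head's CLS-token self-attention
    over image patches, reshaped to (H, W). In practice it would come
    from a DINO-trained ViT; here it is just an array.
    """
    threshold = np.quantile(attn_map, quantile)
    mask = attn_map >= threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy example: a bright 2x2 blob in an otherwise empty 6x6 attention map.
attn = np.zeros((6, 6))
attn[2:4, 3:5] = 1.0
print(attention_to_bbox(attn))  # (3, 2, 4, 3)
```

In a real pipeline the box coordinates would then be scaled from the patch grid back to pixel coordinates, and morphological clean-up (the "traditional computer vision" the poster mentions) would be applied to the mask before taking its extent.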
- Unsupervised semantic segmentation
You will probably need an unwieldy amount of data and compute to reproduce it, so your best option is to use the pretrained models available on GitHub.
- [D] Why Transformers are taking over the Computer Vision world: Self-Supervised Vision Transformers with DINO explained in 7 minutes!
[Full Explanation Post] [Arxiv] [Project Page]
- A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers
Except he is actually talking about the new DINO model created by Facebook that was released on Friday, which is a new approach to image transformers for unsupervised segmentation. Here's its GitHub.
- [D] Paper Explained - DINO: Emerging Properties in Self-Supervised Vision Transformers (Full Video Analysis)
Code: https://github.com/facebookresearch/dino
- [R] DINO and PAWS: Advancing the state of the art in computer vision with self-supervised Transformers
lightly
- Show HN: Lightly – A Python library for self-supervised learning on images
- GitHub - lightly-ai/lightly: A python library for self-supervised learning on images.
- A Python library for self-supervised learning on images
- [P] Release of lightly 1.2.39 - A python library for self-supervised learning
Another year has passed, and we've seen exciting progress in research around self-supervised learning in computer vision. We're very excited that some recent models, such as Masked Autoencoders (MAE) and Masked Siamese Networks (MSN), have been added to our OSS framework.
- Self-Supervised Models are More Robust and Fair
If you’re interested in self-supervised learning and want to try it out yourself you can check out our open-source repository for self-supervised learning.
- [D] Can a Siamese Neural Network work for invoice classification?
I assume that you have an image of the invoice. Then using a framework like https://github.com/lightly-ai/lightly, with many implemented algorithms, is the way to go. Once the model produces embeddings, you need to compare the embedding of a query with your known database and check whether the distance is below some threshold. Of course, a pipeline that checks the closest neighbor can be more complicated, but I would start with something really simple.
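The nearest-neighbor-plus-threshold step suggested here is simple enough to sketch directly. This is an illustrative sketch, not part of the lightly API: the names `match_invoice` and `cosine_distance` and the threshold value are made up for the example, and the embeddings would in practice come from the self-supervised model.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def match_invoice(query_emb, database, threshold=0.2):
    """Return the label of the closest known embedding if its cosine
    distance is below `threshold`, else None.

    `database` maps label -> embedding. The threshold is illustrative
    and would be tuned on a validation set.
    """
    best_label, best_dist = None, float("inf")
    for label, emb in database.items():
        d = cosine_distance(query_emb, emb)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist < threshold else None

# Toy 2-D embeddings standing in for real model outputs.
db = {"vendor_a": [1.0, 0.0], "vendor_b": [0.0, 1.0]}
print(match_invoice([0.9, 0.1], db))  # vendor_a
print(match_invoice([0.7, 0.7], db))  # None (too far from both)
```

For a large database one would swap the linear scan for an approximate nearest-neighbor index, but as the poster says, start with something really simple.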
- [P] TensorFlow Similarity now supports self-supervised training
https://github.com/lightly-ai/lightly implements a lot of self-supervised models, and has been available for a while.
- Launch HN: Lightly (YC S21): Label only the data which improves your ML model
modAL indeed has a similar goal of choosing the best subset of data to be labeled. However, it has some notable differences:
modAL is built on scikit-learn, which is also evident from the suggested workflow. Lightly, on the other hand, was built specifically for deep learning applications, supporting active learning not only for classification but also for object detection and semantic segmentation.
modAL provides uncertainty-based active learning. However, it has been shown that uncertainty-based AL fails at batch-wise AL for vision datasets and CNNs; see https://arxiv.org/abs/1708.00489. Furthermore, it only works with an initially trained model and thus a labeled dataset. Lightly offers self-supervised learning to learn high-dimensional embeddings through its open-source package https://github.com/lightly-ai/lightly. These can be used through our API to choose a diverse subset. Optionally, this sampling can be combined with uncertainty-based AL.
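The "choose a diverse subset from embeddings" idea above can be sketched as greedy k-center (farthest-first) selection, the heuristic behind the core-set approach cited (arXiv:1708.00489). This is a minimal NumPy sketch under that assumption, not Lightly's actual sampler; the function name and the seeded random start are illustrative.

```python
import numpy as np

def k_center_greedy(embeddings: np.ndarray, k: int, seed: int = 0):
    """Greedy k-center selection over an (n, d) embedding matrix.

    Starting from a random point, repeatedly pick the point farthest
    from the already-selected set, so the selection spreads out and
    covers the embedding space. Returns the indices of the k picks.
    """
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    selected = [int(rng.integers(n))]  # random starting point
    # Distance from every point to its nearest selected center so far.
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(dists.argmax())      # farthest point from the selection
        selected.append(idx)
        new = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new)
    return selected

# Two tight clusters: a diverse selection of 2 should cover both.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(k_center_greedy(pts, k=2))
```

Note the contrast with uncertainty-based AL: this selection needs no trained classifier or labels at all, only the self-supervised embeddings, which is exactly the cold-start advantage the comment describes.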
- Lightly – A Python library for self-supervised learning on images
- Active Learning using Detectron2
You can easily train, embed, and upload a dataset using the lightly Python package. First, we need to install the package. We recommend using pip for this. Make sure you're in a Python 3.6+ environment. If you're on Windows, you should create a conda environment.
What are some alternatives?
pytorch-metric-learning - The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
simsiam-cifar10 - Code to train the SimSiam model on cifar10 using PyTorch
DataProfiler - What's in your data? Extract schema, statistics and entities from datasets
Transformer-SSL - This is an official implementation for "Self-Supervised Learning with Swin Transformers".
byol-pytorch - Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch