big_vision
Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. (by google-research)
deit
Official DeiT repository (by facebookresearch)
| | big_vision | deit |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 1,582 | 3,822 |
| Growth | 5.8% | - |
| Activity | 7.1 | 5.1 |
| Latest commit | about 1 month ago | 2 months ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
big_vision
Posts with mentions or reviews of big_vision.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-04-13.
-
I accidentally built a meme search engine
I think this is based off Google research https://github.com/google-research/big_vision
-
Show HN: I made a Pinterest clone using SigLIP image embeddings
The vision training models are available here: https://github.com/google-research/big_vision/tree/main which, based on the research paper, I am assuming is what was used for the project.
-
[D] What are the strongest plain baselines for Vision Transformers on ImageNet?
Found relevant code at https://github.com/google-research/big_vision
-
[P] Simple ViT Implementation in Flax
Official GitHub repository: https://github.com/google-research/big_vision
-
Open-Source Simple-ViT Implementation
An open-source implementation of the Better plain ViT baselines for ImageNet-1k research paper in Google's JAX and Flax.
An update from some of the same authors of the original paper proposes simplifications to ViT that allow it to train faster and better.
These simplifications include 2D sinusoidal positional embeddings, global average pooling (no CLS token), no dropout, a batch size of 1024 rather than 4096, and the use of RandAugment and MixUp augmentations. They also show that a simple linear classifier at the end is not significantly worse than the original MLP head.
Simple ViT Research Paper: https://arxiv.org/abs/2205.01580
Official GitHub repository: https://github.com/google-research/big_vision
Developer updates can be found on: https://twitter.com/EnricoShippole
In collaboration with Dr. Phil 'Lucid' Wang: https://github.com/lucidrains
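Of the simplifications listed above, the 2D sinusoidal positional embedding is the most self-contained to illustrate. The following is a minimal NumPy sketch of that idea (it mirrors the shape of big_vision's `posemb_sincos_2d`, but it is a rough reimplementation for illustration, not the official code):

```python
import numpy as np

def posemb_sincos_2d(h, w, dim, temperature=10000.0):
    """2D sin-cos positional embedding for an h x w patch grid.

    Returns an array of shape (h * w, dim): a quarter of the channels
    each for sin(x), cos(x), sin(y), cos(y) at different frequencies.
    """
    assert dim % 4 == 0, "feature dim must be divisible by 4"
    # Coordinates of every patch in the grid.
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Geometric ladder of frequencies, as in the original Transformer.
    omega = np.arange(dim // 4) / (dim // 4 - 1)
    omega = 1.0 / (temperature ** omega)
    y = y.reshape(-1)[:, None] * omega[None, :]
    x = x.reshape(-1)[:, None] * omega[None, :]
    return np.concatenate([np.sin(x), np.cos(x), np.sin(y), np.cos(y)], axis=1)
```

Because the embedding is a fixed function of the patch coordinates, it is computed once and added to the patch tokens; nothing is learned, which is part of what keeps the Simple ViT recipe small.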
deit
Posts with mentions or reviews of deit.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-02-13.
-
Exploring GradCam and More with FiftyOne
We can now apply the transform and try it ourselves with a vision transformer. We will use DeiT from the FiftyOne model zoo. It can be loaded with a pre- and postprocessor like the following:
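The post's own loading code is not reproduced here. As a rough stand-in, the same pre/postprocessing pattern can be sketched with Hugging Face transformers' DeiT classes (an assumption on my part, not the post's exact FiftyOne zoo code); a randomly initialized model takes the place of the pretrained checkpoint so the sketch runs offline:

```python
import numpy as np
import torch
from PIL import Image
from transformers import DeiTConfig, DeiTForImageClassification, DeiTImageProcessor

# Hypothetical sketch: the post loads DeiT via the FiftyOne model zoo;
# here we substitute Hugging Face transformers. Random weights stand in
# for the pretrained checkpoint purely for illustration.
processor = DeiTImageProcessor()                  # preprocessor: resize + normalize
model = DeiTForImageClassification(DeiTConfig())  # model with default config

image = Image.fromarray(np.zeros((256, 256, 3), dtype=np.uint8))
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits               # postprocess: pick the top class
pred = logits.argmax(-1).item()
```

In practice one would load `from_pretrained(...)` weights instead of a fresh config; the point is only the load-preprocess-forward-postprocess shape of the pipeline.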
-
[D] What are the strongest plain baselines for Vision Transformers on ImageNet?
I think DeiT III is pretty sota