big_vision
Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. (by google-research)
Simple-ViT-flax
By conceptofmind
| | big_vision | Simple-ViT-flax |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 1,746 | 4 |
| Growth | 14.6% | - |
| Activity | 7.1 | 3.2 |
| Last commit | 5 days ago | almost 2 years ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
big_vision
Posts with mentions or reviews of big_vision.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-04-13.
- I accidentally built a meme search engine
  I think this is based on Google research: https://github.com/google-research/big_vision
- Show HN: I made a Pinterest clone using SigLIP image embeddings
  The vision training models are available at https://github.com/google-research/big_vision/tree/main, which, based on the research paper, I assume is what was used for the project.
- [D] What are the strongest plain baselines for Vision Transformers on ImageNet?
  Found relevant code at https://github.com/google-research/big_vision, along with other code implementations.
- [P] Simple ViT Implementation in Flax
  Official GitHub repository: https://github.com/google-research/big_vision
- Open-Source Simple-ViT Implementation
  An open-source implementation of the "Better plain ViT baselines for ImageNet-1k" research paper in Google's JAX and Flax.
  An update from some of the authors of the original paper proposes simplifications that allow ViT to train faster and better. These simplifications include 2D sinusoidal positional embeddings, global average pooling (no CLS token), no dropout, batch sizes of 1024 rather than 4096, and the use of RandAugment and MixUp augmentations. The authors also show that a simple linear layer at the end is not significantly worse than the original MLP head.
  Simple ViT research paper: https://arxiv.org/abs/2205.01580
  Official GitHub repository: https://github.com/google-research/big_vision
  Developer updates can be found at: https://twitter.com/EnricoShippole
  In collaboration with Dr. Phil 'Lucid' Wang: https://github.com/lucidrains
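Of the simplifications listed above, the 2D sinusoidal positional embedding is the most self-contained to illustrate. Below is a minimal NumPy sketch of the standard sin-cos formulation for a grid of patches; the function name and exact frequency scaling are assumptions for illustration, and the actual big_vision / Simple-ViT-flax code may differ in detail.

```python
import numpy as np

def posemb_sincos_2d(h, w, dim, temperature=10000.0):
    """Fixed 2D sin-cos positional embedding for an h x w patch grid.

    Hypothetical standalone sketch of the scheme described above,
    not a copy of the official implementation.
    """
    assert dim % 4 == 0, "feature dim must be a multiple of 4 for 2D sin-cos"
    y, x = np.mgrid[:h, :w]                       # patch grid coordinates
    omega = np.arange(dim // 4) / (dim // 4 - 1)
    omega = 1.0 / (temperature ** omega)          # geometric frequency bands
    y = y.reshape(-1)[:, None] * omega[None, :]   # (h*w, dim//4)
    x = x.reshape(-1)[:, None] * omega[None, :]   # (h*w, dim//4)
    # Concatenate sin/cos of both axes -> (h*w, dim); added to patch tokens.
    return np.concatenate([np.sin(x), np.cos(x), np.sin(y), np.cos(y)], axis=1)

# e.g. a 224x224 image with 16x16 patches gives a 14x14 grid of 196 tokens
pe = posemb_sincos_2d(14, 14, 768)
print(pe.shape)  # (196, 768)
```

Because the embedding is a fixed function of the grid position, it adds no learned parameters, which is part of what makes the "plain" baseline simple.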
Simple-ViT-flax
Posts with mentions or reviews of Simple-ViT-flax.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-07-10.
- [P] Simple ViT Implementation in Flax
  GitHub repository for the Flax / JAX model: https://github.com/conceptofmind/Simple-ViT-flax
- Open-Source Simple-ViT Implementation