| | vision_transformer | fashion-200k |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 9,287 | 60 |
| Growth | 2.2% | - |
| Activity | 5.5 | 10.0 |
| Last commit | about 2 months ago | about 2 years ago |
| Language | Jupyter Notebook | |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vision_transformer

- Can I use CLIP to tag my picture collection?
  And one last thing, should I even be thinking of using CLIP for these tasks when Google has released a better model here: https://github.com/google-research/vision_transformer/blob/main/model_cards/lit.md (see the tagging sketch after this list)
- When the client's management is happy but their dev team is a pain
  Google's vision transformers are type hinted.
- Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
  We're going to look at a CLIP model trained on a broad multilingual dataset: xlm-roberta-base-ViT-B-32, which pairs the ViT-B/32 image encoder with the XLM-RoBERTa multilingual language model. Both encoders are pre-trained. (See the retrieval sketch after this list.)
- [R] How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
  JAX code: https://github.com/google-research/vision_transformer
- [D] (Paper Overview) MLP-Mixer: An all-MLP Architecture for Vision
- [P] Animesion: a framework for anime (and related) character recognition. It uses Vision Transformers trained on a subset of Danbooru2018, which we rebranded as DAF:re, and can classify a given image into one of more than 3,000 characters! Source code and checkpoints included.
  For this project I used the pretrained models released by Google in JAX, through this particular PyTorch custom implementation. Those were pretrained on ImageNet-21k, with 14M images across 21K classes. I then fine-tune on two datasets: one with 15K images and 170 characters, and one with 3K characters and almost 500K images. (See the fine-tuning sketch after this list.)
- Short term memory solutions for video tasks?
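For the CLIP tagging question above, here is a minimal sketch of zero-shot tagging with OpenAI's CLIP through the Hugging Face transformers API. The tag vocabulary, image path, and probability threshold are illustrative assumptions, not details from the thread:

```python
# Zero-shot image tagging sketch with CLIP (Hugging Face transformers).
# The candidate tags, file name, and 0.2 threshold are assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidate_tags = ["a dog", "a cat", "a beach", "a mountain", "food"]

def tag_image(path: str, threshold: float = 0.2) -> list[str]:
    image = Image.open(path)
    inputs = processor(text=candidate_tags, images=image,
                       return_tensors="pt", padding=True)
    # logits_per_image holds the image's similarity to each candidate tag;
    # softmax turns them into probabilities over the tag list.
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return [t for t, p in zip(candidate_tags, probs.tolist()) if p >= threshold]

print(tag_image("photo.jpg"))
```

Because the softmax normalizes over the tag list, scores are relative; for genuinely multi-label tagging you would compare each tag's raw similarity against a calibrated cutoff instead.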
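The multilingual CLIP mention pairs a ViT-B/32 image tower with an XLM-RoBERTa text tower. Below is a sketch of scoring a non-English query against a few images with the open_clip library; the pretrained tag and the image file names are assumptions:

```python
# Multilingual text-to-image retrieval sketch with open_clip.
# The pretrained tag and file names are illustrative assumptions.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "xlm-roberta-base-ViT-B-32", pretrained="laion5b_s13b_b90k")
tokenizer = open_clip.get_tokenizer("xlm-roberta-base-ViT-B-32")

images = torch.stack([preprocess(Image.open(p))
                      for p in ["dress.jpg", "jacket.jpg"]])
query = tokenizer(["rotes Kleid"])  # German for "red dress"

with torch.no_grad():
    img_emb = model.encode_image(images)
    txt_emb = model.encode_text(query)
    # L2-normalize so the dot product is cosine similarity
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (txt_emb @ img_emb.T).squeeze(0)

print(scores)  # one similarity score per image; higher is a better match
```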
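Both the "How to train your ViT?" paper and the Animesion project follow the same recipe: start from a ViT pretrained on ImageNet-21k and fine-tune it on a smaller labeled set. Here is a sketch of that recipe with timm (pytorch-image-models, listed under the alternatives below); the checkpoint name, class count, and hyperparameters are assumptions for illustration:

```python
# Fine-tuning sketch: an ImageNet-21k-pretrained ViT adapted to a new
# classification task with timm. The checkpoint name, 3000-class head
# (roughly the Animesion character count), and hyperparameters are assumptions.
import timm
import torch
from torch.utils.data import DataLoader

# num_classes replaces the pretrained head with a freshly initialized one
model = timm.create_model("vit_base_patch16_224.augreg_in21k",
                          pretrained=True, num_classes=3000)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

def train_epoch(loader: DataLoader) -> None:
    # loader is assumed to yield preprocessed (B, 3, 224, 224) image
    # batches and integer class labels
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```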
fashion-200k

- Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
  The images are a subset of the xthan/fashion-200k dataset, and we commissioned human annotations via Toloka's crowdsourcing platform. Annotations were made in two steps. First, Toloka passed the 12,000 images to annotators in its large international user community, who added descriptive captions.
What are some alternatives?

- pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
- nerfstudio - A collaboration-friendly studio for NeRFs
- ImageNet21K - Official PyTorch implementation of the paper "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)
- Fashion12K_german_queries
- TorchSharp - A .NET library that provides access to the library that powers PyTorch.
- typeshed - Collection of library stubs for Python, with static types
- docarray - Represent, send, store and search multimodal data
- beartype - Unbearably fast near-real-time hybrid runtime-static type-checking in pure Python.
- Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration