| | vision_transformer | Fashion12K_german_queries |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 9,287 | 3 |
| Growth | 2.2% | - |
| Activity | 5.5 | 0.0 |
| Last commit | about 2 months ago | about 1 year ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Creative Commons Attribution 4.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vision_transformer
-
Can I use CLIP to tag my picture collection?
And one last thing: should I even be thinking of using CLIP for these tasks when Google has released a better model here: https://github.com/google-research/vision_transformer/blob/main/model_cards/lit.md
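The tagging idea in the question above boils down to comparing CLIP embeddings: embed each candidate tag prompt and each picture, then assign the tag whose embedding is most similar. A minimal sketch, with random vectors standing in for real CLIP embeddings (in practice they come from the model's text and image encoders; the tag names and dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CLIP embeddings; in practice the text encoder embeds
# prompts like "a photo of a dog" and the image encoder embeds pictures.
tags = ["dog", "cat", "car"]
tag_emb = rng.normal(size=(3, 512))   # one vector per candidate tag
img_emb = rng.normal(size=(5, 512))   # one vector per picture

# CLIP matching uses cosine similarity: L2-normalise, then dot product.
tag_emb /= np.linalg.norm(tag_emb, axis=1, keepdims=True)
img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
sims = img_emb @ tag_emb.T            # shape (5 images, 3 tags)

# Tag each picture with its best-matching prompt.
best = [tags[i] for i in sims.argmax(axis=1)]
print(best)
```

With real embeddings, a similarity threshold is usually added so that pictures matching none of the tags stay untagged.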
-
When the client's management is happy but their dev team is a pain
Google's vision transformers are type hinted.
-
Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
We’re going to look at a model that OpenAI has trained with a broad multilingual dataset: the xlm-roberta-base-ViT-B-32 CLIP model, which uses the ViT-B/32 image encoder and the XLM-RoBERTa multilingual language model. Both of these are pre-trained.
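Because both encoders project into the same joint space, a non-English query can be scored directly against image embeddings. A sketch of the retrieval step with placeholder vectors (in practice the ViT-B/32 encoder produces the image vectors and XLM-RoBERTa the German query vector; the 512-dim size is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder embeddings: 100 catalogue images and one German text query,
# both assumed to live in the model's shared 512-dim embedding space.
img_emb = rng.normal(size=(100, 512))
query_emb = rng.normal(size=(512,))

# Normalise so the dot product equals cosine similarity.
img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
query_emb /= np.linalg.norm(query_emb)

# Rank images by similarity to the query, best first.
scores = img_emb @ query_emb
top10 = np.argsort(-scores)[:10]
print(top10)
```

Fine-tuning on German captions, as the post describes, shifts these embeddings so that relevant images score higher for German queries; the ranking step itself is unchanged.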
-
[R] How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
JAX Code: https://github.com/google-research/vision_transformer
- [D] (Paper Overview) MLP-Mixer: An all-MLP Architecture for Vision
-
[P] Animesion: a framework for anime (and related) character recognition. It uses Vision Transformers trained on a subset of Danbooru2018, which we rebranded as DAF:re, and can classify a given image into one of more than 3,000 characters! Source code and checkpoints included.
For this project I used the pretrained models released by Google in JAX, via this particular PyTorch custom implementation. Those were pretrained on ImageNet-21k, with 14M images across 21K classes. I then fine-tune on two datasets: one with 15K images and 170 characters, and one with almost 500K images and 3K characters.
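The fine-tuning recipe described above amounts to taking a pretrained ViT backbone and attaching a fresh classification head sized to the new label set. A minimal PyTorch sketch; the tiny backbone here is a hypothetical stand-in for the real pretrained ViT (any module producing a 768-dim embedding works the same way), and fully freezing the backbone is just one common variant:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained ViT backbone: pools the image
# and projects it to a 768-dim embedding (the real ViT-B embedding size).
backbone = nn.Sequential(
    nn.AdaptiveAvgPool2d(16),
    nn.Flatten(),
    nn.Linear(3 * 16 * 16, 768),
)

# Freeze the "pretrained" weights ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and attach a fresh head for the ~3,000 character classes.
head = nn.Linear(768, 3000)
model = nn.Sequential(backbone, head)

logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3000])
```

Only `head.parameters()` would be passed to the optimizer in this variant; unfreezing the backbone with a lower learning rate is the other common choice.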
- Short term memory solutions for video tasks?
Fashion12K_german_queries
-
Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
We have collaborated with Toloka to curate a 12,000 item dataset of fashion images drawn from e-commerce websites, to which human annotators have added descriptive captions in German. Toloka has made the data available to the public on GitHub, but you can also download it from Jina directly in DocArray format by following the instructions in the next section.
What are some alternatives?
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
ImageNet21K - Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper
nerfstudio - A collaboration friendly studio for NeRFs
TorchSharp - A .NET library that provides access to the library that powers PyTorch.
fashion-200k - Fashion 200K dataset used in paper "Automatic Spatially-aware Fashion Concept Discovery."
typeshed - Collection of library stubs for Python, with static types
docarray - Represent, send, store and search multimodal data
beartype - Unbearably fast near-real-time hybrid runtime-static type-checking in pure Python.
PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration