| | continual-pretraining-nlp-vision | vision-transformer-from-scratch |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 14 | 88 |
| Growth | - | - |
| Activity | 4.8 | 4.9 |
| Latest commit | 7 months ago | 10 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
continual-pretraining-nlp-vision
- [2205.09357] Continual Pre-Training Mitigates Forgetting in Language and Vision
  Code for https://arxiv.org/abs/2205.09357 found: https://github.com/AndreaCossu/continual-pretraining-nlp-vision
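The paper above studies catastrophic forgetting under continued (pre-)training. As a toy illustration of how forgetting is typically measured (train on task A, keep training the same weights on task B, then re-evaluate on A), entirely separate from the paper's actual transformer setup, here is a minimal NumPy sketch with two synthetic linear-regression tasks:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """Synthetic linear-regression task with a fixed ground-truth weight vector."""
    X = rng.standard_normal((200, 2))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on the mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

Xa, ya = make_task(np.array([1.0, -2.0]))   # task A
Xb, yb = make_task(np.array([-3.0, 0.5]))   # task B

w = np.zeros(2)
w = train(w, Xa, ya)
loss_a_before = mse(w, Xa, ya)   # near zero: the model fits task A
w = train(w, Xb, yb)
loss_a_after = mse(w, Xa, ya)    # much larger: the model "forgot" task A
```

The gap between `loss_a_after` and `loss_a_before` is the forgetting metric; the paper's question is how much continual pre-training (rather than fine-tuning from scratch) shrinks that gap for language and vision models.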
vision-transformer-from-scratch
- [P] Implementing Vision Transformer (ViT) from Scratch using PyTorch
  Github: https://github.com/tintn/vision-transformer-from-scratch
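A from-scratch ViT implementation like the one linked above begins by cutting the image into fixed-size non-overlapping patches and linearly projecting each flattened patch into a token embedding. A minimal NumPy sketch of that patch-embedding step (the 32x32 image, 4x4 patches, and random projection are illustrative choices, not taken from the repo):

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into flattened non-overlapping patches.

    Returns an array of shape (num_patches, patch_size * patch_size * C).
    """
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0
    p = patch_size
    # Reshape to (H//p, p, W//p, p, C), then group the two patch-grid axes first.
    patches = image.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3))
patches = patchify(img, 4)                 # 64 patches, each of length 4*4*3 = 48
W_proj = rng.standard_normal((patches.shape[1], 16))
tokens = patches @ W_proj                  # (64, 16) patch embeddings
```

In the full model these tokens get a prepended class token and learned position embeddings before entering the transformer encoder; real implementations usually fuse patchify-plus-projection into a single strided convolution.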
What are some alternatives?
- gan-vae-pretrained-pytorch - Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in PyTorch.
- super-gradients - Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
- awesome-machine-unlearning - Awesome Machine Unlearning (A Survey of Machine Unlearning)
- Transformers-Tutorials - This repository contains demos made with the Transformers library by HuggingFace.
- maxvit - [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmentation, image quality, and generative modeling...
- glami-1m - The largest multilingual image-text classification dataset. It contains fashion products.
- HugsVision - HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision.