| | vit-pytorch | DALLE-pytorch |
|---|---|---|
| Mentions | 11 | 20 |
| Stars | 21,491 | 5,598 |
| Growth | 2.9% | 0.4% |
| Activity | 7.8 | 2.5 |
| Latest commit | 19 days ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vit-pytorch
-
Is it easier to go from Pytorch to TF and Keras than the other way around?
I also need to learn PySpark, so right now I am going to download the Fashion-MNIST dataset, use PySpark to downsize each image and put them into separate folders according to their labels (just to show employers I can do some basic ETL with PySpark; not sure how I am going to load them for training in PyTorch yet, though). Then I am going to write the simplest LeNet to try to categorize the Fashion-MNIST dataset (the results will most likely be bad, but that's okay). Next, I'll try to learn transfer learning in PyTorch for CNNs, or maybe skip ahead to ViT. Ideally at this point I want to study the attention mechanism a bit more and try to implement SimpleViT, which I saw here: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/simple_vit.py
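As a rough sketch of that last step, here is how SimpleViT from vit-pytorch could be instantiated for Fashion-MNIST-sized inputs (28x28 grayscale, 10 classes); the hyperparameters below are illustrative guesses, not tuned values:

```python
import torch
from vit_pytorch import SimpleViT  # pip install vit-pytorch

# Illustrative configuration for 28x28 grayscale Fashion-MNIST images.
# patch_size must divide image_size evenly (28 / 7 = 4 patches per side).
model = SimpleViT(
    image_size = 28,
    patch_size = 7,
    num_classes = 10,
    dim = 256,
    depth = 6,
    heads = 8,
    mlp_dim = 512,
    channels = 1        # Fashion-MNIST is single-channel
)

images = torch.randn(16, 1, 28, 28)   # dummy batch in place of a real data loader
logits = model(images)                # (16, 10) class logits
```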
-
What are the best resources online to learn attention and transformers?
For code implementation, check out this git repo. It contains fairly straightforward PyTorch implementations for various ViT papers with references.
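For readers who want to connect the reading material to code, a minimal single-head scaled dot-product attention in PyTorch looks roughly like this (a teaching sketch, not code taken from the linked repo):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, seq_len, dim)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)   # (batch, seq, seq)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))
    attn = F.softmax(scores, dim=-1)   # attention weights over the keys
    return attn @ v                    # weighted sum of the values

q = k = v = torch.randn(2, 8, 64)
out = scaled_dot_product_attention(q, k, v)   # (2, 8, 64)
```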
-
Training CNN/VIT on very small dataset
For ViTs specifically, there's been a good amount of research trying to extend ViTs to work on small datasets without a large amount of pre-training (which comes with its own host of issues, such as the best way to fine-tune such a huge model). One paper which comes to mind is ViTs for small datasets (https://arxiv.org/abs/2112.13492), which has an implementation in lucidrains' repo here: https://github.com/lucidrains/vit-pytorch
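As a quick orientation, a usage sketch for that small-dataset variant might look like the following; the module path and hyperparameters are assumptions based on how the repo usually keeps each paper variant in its own file, so check them against the README:

```python
import torch
# Assumed module path for the small-dataset ViT variant.
from vit_pytorch.vit_for_small_dataset import ViT

model = ViT(
    image_size = 64,      # small-image regime, e.g. Tiny-ImageNet-sized crops
    patch_size = 8,
    num_classes = 200,
    dim = 256,
    depth = 6,
    heads = 8,
    mlp_dim = 512,
    dropout = 0.1,
    emb_dropout = 0.1
)

images = torch.randn(4, 3, 64, 64)
logits = model(images)    # (4, 200)
```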
-
Transformers in RL
Here's a pytorch implementation of ViT https://github.com/lucidrains/vit-pytorch
-
[P] Release the Vision Transformer Cookbook with Tensorflow ! (Thanks to @lucidrains)
Looks great, Junho! I've linked to it from https://github.com/lucidrains/vit-pytorch like you asked :)
-
Will Transformers Take over Artificial Intelligence?
Sure thing. Also if you're getting into transformers I'd recommend lucidrains's GitHub[0] since it has a large collection of them with links to papers. It's nice that things are consolidated.
[0] https://github.com/lucidrains/vit-pytorch
-
[D] Surprisingly Simple SOTA Self-Supervised Pretraining - Masked Autoencoders Are Scalable Vision Learners by Kaiming He et al. explained (5-minute summary by Casual GAN Papers)
Nah, it is really simple. Here is the code: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/mae.py
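For a sense of how small the moving parts are, the MAE module in that file wraps a standard ViT encoder and handles the patch masking and reconstruction internally. A hedged sketch in the spirit of the repo's README (hyperparameters are illustrative):

```python
import torch
from vit_pytorch import ViT
from vit_pytorch.mae import MAE

encoder = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048
)

mae = MAE(
    encoder = encoder,
    masking_ratio = 0.75,   # the paper masks 75% of the patches
    decoder_dim = 512,      # lightweight decoder
    decoder_depth = 6
)

images = torch.randn(8, 3, 256, 256)
loss = mae(images)          # reconstruction loss on the masked patches
loss.backward()
```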
-
[D] Training vision transformers on a specific dataset from scratch
lucidrains' ViT repo has all of what you may need in a clean API
- Can I train a transformer for image classification on Google Colab?
-
[R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
I've attempted it here: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py, but those who have tried it haven't seen the same knockout results as in 1D. Perhaps the axial lengths are too small to see a benefit.
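For readers unfamiliar with the idea, rotary embeddings rotate each query/key feature pair by a position-dependent angle before the dot product. A minimal 1D sketch using the rotate-half convention (not the repo's 2D axial variant, which splits the rotation across the height and width coordinates of each patch):

```python
import torch

def apply_rotary(x, base=10000):
    # x: (batch, heads, seq_len, dim) with even dim, e.g. queries or keys
    *_, n, d = x.shape
    half = d // 2
    freqs = 1.0 / (base ** (torch.arange(half, device=x.device, dtype=x.dtype) / half))
    angles = torch.arange(n, device=x.device, dtype=x.dtype)[:, None] * freqs  # (n, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # rotate each (x1, x2) pair by its position-dependent angle
    return torch.cat((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)

q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 16, 64)
q, k = apply_rotary(q), apply_rotary(k)   # then compute attention as usual
```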
DALLE-pytorch
- The Eleuther AI Mafia
-
Thoughts on AI image generators from text
Here you go: https://github.com/lucidrains/DALLE-pytorch
-
[P] DALL·E Mini & Mega demo and production API
Here are some other implementations of DALL-E clones in PyTorch by various authors in the ML and DL community: https://github.com/lucidrains/DALLE-pytorch
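As a rough idea of what training one of these clones involves, DALLE-pytorch composes a discrete VAE (the image tokenizer) with a text-to-image transformer. A sketch along the lines of its README; the keyword names and values are from memory and should be checked against the repo:

```python
import torch
from dalle_pytorch import DiscreteVAE, DALLE

vae = DiscreteVAE(
    image_size = 256,
    num_layers = 3,
    num_tokens = 8192,      # codebook size
    codebook_dim = 512,
    hidden_dim = 64
)

dalle = DALLE(
    dim = 1024,
    vae = vae,              # image tokenizer; image sequence length is inferred from it
    num_text_tokens = 10000,
    text_seq_len = 256,
    depth = 12,
    heads = 16
)

text = torch.randint(0, 10000, (2, 256))
images = torch.randn(2, 3, 256, 256)

loss = dalle(text, images, return_loss = True)
loss.backward()
```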
- New text-to-image network from Google beats DALL-E
-
[Project] DALL-3 - generate better images with fewer tokens through clip guided diffusion
If, in general, DDPM > GAN > VAE, why do transformer image generators all use a VQ-VAE to decode images? Wouldn't it be better to use a diffusion model? I was wondering about this and started experimenting with different ways to decode vector-quantized embeddings with a diffusion model (see discussion here). After a lot of trial and error I got something that works pretty well.
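For context, the standard path being discussed is: encode the image to a grid of discrete codebook indices, then decode those indices back to pixels. Roughly, with DALLE-pytorch's DiscreteVAE (the helper method names are recalled from the repo and should be treated as assumptions):

```python
import torch
from dalle_pytorch import DiscreteVAE

vae = DiscreteVAE(image_size = 256, num_layers = 3, num_tokens = 8192,
                  codebook_dim = 512, hidden_dim = 64)

images = torch.randn(1, 3, 256, 256)

# Encode to a sequence of discrete token ids, one per latent grid cell
# (256 / 2**3 = 32, so a 32x32 grid here).
codes = vae.get_codebook_indices(images)   # assumed helper, shape (1, 1024)

# Decode back to pixels. The experiment described above swaps this decode
# step for a diffusion model conditioned on the same quantized codes.
recon = vae.decode(codes)                  # (1, 3, 256, 256)
```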
- Still waiting for dall-e
-
Ask HN: Computer Vision Project Ideas?
- "Discrete VAE", used as the backbone for OpenAI's DALL-E, reimplimented here (and other places) https://github.com/lucidrains/DALLE-pytorch (code for training a discrete VAE)
-
Crawling@Home: Help Build The Worlds Largest Image-Text Pair Dataset!
Here's the DALLE-pytorch git repo.
-
(from the discord stream) I'm so hyped for this game. This generation is really good.
I am very excited. When AI Dungeon was released and I saw them filtering stuff, I thought that one day there would be an open-source version of this without filters, and the same goes for any future open-sourced GPT-X. Now if we can also train an open-source DALL-E and integrate it into NovelAI, wouldn't that be even more awesome?
-
When was the last time you were as excited about something as a child?
Maybe at https://github.com/lucidrains/DALLE-pytorch and https://github.com/kobiso/DALLE-reproduction
What are some alternatives?
reformer-pytorch - Reformer, the efficient Transformer, in Pytorch
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
open_clip - An open source implementation of CLIP.
CeiT - Implementation of Convolutional enhanced image Transformer
deep-daze - Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun
Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
MLP-Mixer-pytorch - Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision
imagen-pytorch - Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch