x-transformers vs DALLE-pytorch

| | x-transformers | DALLE-pytorch |
|---|---|---|
| Mentions | 10 | 20 |
| Stars | 4,760 | 5,569 |
| Growth | - | - |
| Activity | 9.0 | 2.5 |
| Latest commit | 6 days ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
x-transformers
- x-transformers
- GPT-4 architecture: what we can deduce from research literature
- Doubt about transformers
- The GPT Architecture, on a Napkin
It is all documented here, in writing and in code: https://github.com/lucidrains/x-transformers
You will want to use rotary embeddings if you do not need length extrapolation.
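For concreteness, here is a minimal sketch of how rotary embeddings are switched on in x-transformers. The vocabulary size and dimensions are placeholders, and the keyword names follow the repository's README at the time of writing, so treat this as illustrative rather than definitive.

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# Minimal autoregressive decoder with rotary positional embeddings enabled.
# All sizes below are arbitrary placeholders.
model = TransformerWrapper(
    num_tokens = 20000,
    max_seq_len = 1024,
    attn_layers = Decoder(
        dim = 512,
        depth = 6,
        heads = 8,
        rotary_pos_emb = True   # rotary embeddings instead of absolute positions
    )
)

x = torch.randint(0, 20000, (1, 1024))
logits = model(x)               # (1, 1024, 20000)
```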
- [R] Deepmind's Gato: a generalist learning agent
It is just a single transformer encoder, so just use https://github.com/lucidrains/x-transformers with ff_glu set to True.
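The same pattern as the snippet above applies; a sketch of an encoder with the GLU feedforward variant turned on might look like this (again, sizes are placeholders and the kwarg name follows the README):

```python
import torch
from x_transformers import TransformerWrapper, Encoder

model = TransformerWrapper(
    num_tokens = 256,
    max_seq_len = 1024,
    attn_layers = Encoder(
        dim = 512,
        depth = 6,
        heads = 8,
        ff_glu = True   # GLU variant of the feedforward block, as suggested above
    )
)

tokens = torch.randint(0, 256, (1, 1024))
out = model(tokens)   # (1, 1024, 256)
```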
- [D] Transformer sequence generation - is it truly quadratic scaling?
However, I've come across the concept of key/value caching in transformer decoders recently (e.g. Figure 3 here), wherein, because each output (and hence each input, since the model is autoregressive) only depends on previous outputs (inputs), we don't need to re-compute key and value vectors for all t < t_i at timestep i of the sequence. My intuition leads me to believe, then, that (unconditioned) inference for a decoder-only model uses an effective sequence length of 1 (the most recently produced token is the only new input that requires computation), making attention a linear-complexity operation per step. This thinking seems to be validated by this github issue, and this paper (2nd paragraph of Introduction).
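To make the caching idea concrete, here is a minimal single-head sketch in plain PyTorch (not any particular library's implementation): at each step only the newest token's query, key and value are computed, the key/value pair is appended to a cache, and the single query attends over all cached keys, so per-step cost grows linearly with the sequence length. No causal mask is needed because the one query only ever sees past and current positions.

```python
import torch

class CachedSelfAttention(torch.nn.Module):
    """Single-head self-attention that processes one new token at a time."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(dim, dim, bias=False)
        self.to_v = torch.nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x_new, cache=None):
        # x_new: (batch, 1, dim) - embedding of the most recent token only
        q, k, v = self.to_q(x_new), self.to_k(x_new), self.to_v(x_new)
        if cache is not None:
            k = torch.cat([cache[0], k], dim=1)   # keys for all tokens so far
            v = torch.cat([cache[1], v], dim=1)   # values for all tokens so far
        attn = ((q @ k.transpose(-1, -2)) * self.scale).softmax(dim=-1)  # (batch, 1, t)
        out = attn @ v                                                   # (batch, 1, dim)
        return out, (k, v)

# Usage: feed one token embedding per step, carrying the cache forward.
layer = CachedSelfAttention(dim=64)
cache = None
for step in range(5):
    x_new = torch.randn(2, 1, 64)      # embedding of the token produced at this step
    out, cache = layer(x_new, cache)   # the cache grows by one key/value per step
```

Note that while per-step compute becomes linear, the cache's memory footprint still grows linearly with the generated sequence.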
- [D] Sudden drop in loss after hours of no improvement - is this a thing?
The Project - Model: The primary architecture consists of a CNN with a transformer encoder and decoder. At first I used my own implementation of self-attention, but since it was not converging I switched to the x-transformers implementation by lucidrains, as it includes improvements from many papers. The objective is simple: the CNN encoder converts images to a high-level representation and feeds it to the transformer encoder for information flow. Finally, a transformer decoder tries to decode the text character by character with an autoregressive loss. After two weeks of trying different things, the training did not converge within the first hour, which is the usual mark I use to check whether a model is learning.
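For readers unfamiliar with that setup, here is a rough sketch of such a pipeline built from x-transformers components. The CNN backbone, dimensions and vocabulary size are placeholders rather than the poster's actual model, and the cross-attention wiring follows the x-transformers README.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from x_transformers import TransformerWrapper, Encoder, Decoder

DIM, NUM_CHARS, MAX_TEXT_LEN = 512, 100, 128   # placeholder sizes

# 1. CNN backbone: image -> grid of feature vectors (a toy stand-in)
cnn = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, DIM, 3, stride=2, padding=1), nn.ReLU(),
)

# 2. Transformer encoder applied to the flattened (batch, seq, dim) feature sequence
encoder = Encoder(dim=DIM, depth=6, heads=8)

# 3. Autoregressive character decoder that cross-attends to the encoded image features
decoder = TransformerWrapper(
    num_tokens=NUM_CHARS,
    max_seq_len=MAX_TEXT_LEN,
    attn_layers=Decoder(dim=DIM, depth=6, heads=8, cross_attend=True),
)

images = torch.randn(2, 3, 64, 256)            # batch of text-line images
chars = torch.randint(0, NUM_CHARS, (2, 32))   # ground-truth character ids

feats = cnn(images).flatten(2).transpose(1, 2)  # (2, h*w, DIM)
context = encoder(feats)                        # contextualised image features

logits = decoder(chars[:, :-1], context=context)             # teacher forcing
loss = F.cross_entropy(logits.transpose(1, 2), chars[:, 1:]) # next-character prediction
```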
- Hacker News top posts: May 9, 2021
X-Transformers: A fully-featured transformer with experimental features (25 comments)
- [D] Theoretical papers on transformers? (or attention mechanism, or just seq2seq?)
One thing I've looked at is the fact that, as far as I can tell, there's no obvious reason to distinguish between W_K and W_Q in the formulation of a transformer. However, if you build a transformer where you merge the two matrices, it doesn't learn as well. It still learns, just not as well. You can try out the code here. The training loss can be seen here, though we aborted the run because of how poorly it was doing.
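The intuition behind the comment is that the attention score between positions i and j is x_i W_Q (x_j W_K)^T = x_i (W_Q W_K^T) x_j^T, so only the product W_Q W_K^T ever appears in the scores; tying W_Q = W_K, however, restricts that product to a symmetric, positive semi-definite matrix, which may be why the merged version learns worse. Below is a minimal illustration of the two parameterisations (not the commenter's actual code, which is only linked above):

```python
import torch
import torch.nn as nn

dim = 64

class SeparateQK(nn.Module):
    """Standard attention scores with distinct query and key projections."""
    def __init__(self):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)

    def scores(self, x):                                  # x: (batch, seq, dim)
        q, k = self.to_q(x), self.to_k(x)
        return q @ k.transpose(-1, -2) * dim ** -0.5      # arbitrary bilinear form

class MergedQK(nn.Module):
    """Merged variant: a single projection shared by queries and keys."""
    def __init__(self):
        super().__init__()
        self.to_qk = nn.Linear(dim, dim, bias=False)

    def scores(self, x):
        qk = self.to_qk(x)
        return qk @ qk.transpose(-1, -2) * dim ** -0.5    # symmetric, PSD score matrix

x = torch.randn(2, 16, dim)
print(SeparateQK().scores(x).shape, MergedQK().scores(x).shape)  # both (2, 16, 16)
```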
DALLE-pytorch
- The Eleuther AI Mafia
- Thoughts on AI image generators from text
Here you go: https://github.com/lucidrains/DALLE-pytorch
- [P] DALL·E Mini & Mega demo and production API
Here are some other implementations of DALL·E clones in PyTorch by various authors in the ML and DL community: https://github.com/lucidrains/DALLE-pytorch
- New text-to-image network from Google beats DALL-E
- [Project] DALL-3 - generate better images with fewer tokens through clip guided diffusion
If, in general, DDPM > GAN > VAE, why do transformer image generators all use a VQ-VAE to decode images? Wouldn't it be better to use a diffusion model? I was wondering about this and started experimenting with different ways to decode vector-quantized embeddings with a diffusion model - see discussion here. After a lot of trial and error I got something that works pretty well.
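The post does not describe the author's actual method, but the general idea of decoding vector-quantized embeddings with a diffusion model can be sketched as a conditional DDPM: the denoiser receives the noisy image together with the (upsampled) code-embedding grid and is trained with the usual noise-prediction objective. The tiny network and shapes below are purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyConditionalDenoiser(nn.Module):
    """Toy denoiser: predicts added noise given the noisy image, timestep, and VQ codes."""
    def __init__(self, img_channels=3, code_dim=256, hidden=64):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + code_dim + hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
        )

    def forward(self, x_t, t, codes):
        # x_t:   (b, 3, H, W) noisy image;  t: (b,) timesteps scaled to [0, 1]
        # codes: (b, code_dim, h, w) VQ embedding grid, upsampled to the image size
        codes = F.interpolate(codes, size=x_t.shape[-2:], mode="nearest")
        temb = self.time_mlp(t[:, None])[:, :, None, None].expand(-1, -1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, codes, temb], dim=1))

# One DDPM-style training step: corrupt the image, predict the noise given the codes.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1 - betas, dim=0)

model = TinyConditionalDenoiser()
x0 = torch.randn(4, 3, 64, 64)       # stand-in for real images
codes = torch.randn(4, 256, 8, 8)    # stand-in for the VQ embeddings of those images
t = torch.randint(0, T, (4,))
noise = torch.randn_like(x0)
xt = (alpha_bar[t].sqrt()[:, None, None, None] * x0
      + (1 - alpha_bar[t]).sqrt()[:, None, None, None] * noise)

pred = model(xt, t.float() / T, codes)
loss = F.mse_loss(pred, noise)       # standard epsilon-prediction objective
```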
- Still waiting for dall-e
- Ask HN: Computer Vision Project Ideas?
- "Discrete VAE", used as the backbone for OpenAI's DALL-E, reimplemented here (and other places): https://github.com/lucidrains/DALLE-pytorch (code for training a discrete VAE)
- Crawling@Home: Help Build The World's Largest Image-Text Pair Dataset!
Here's the DALLE-pytorch git repo.
- (from the discord stream) I'm so hyped for this game. This generation is really good.
I am very excited. When AI Dungeon was released and I saw them filtering content, I thought that one day there would be an open-source version of this without filters, and the same goes for any future open-sourced GPT-X. Now, if we could also train an open-source DALL-E and integrate it into NovelAI, wouldn't that be even more awesome?
- When was the last time you were as excited about something as a child?
Maybe at https://github.com/lucidrains/DALLE-pytorch and https://github.com/kobiso/DALLE-reproduction
What are some alternatives?
EasyOCR - Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
TimeSformer-pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
flamingo-pytorch - Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
deep-daze - Simple command-line tool for text-to-image generation using OpenAI's CLIP and SIREN (an implicit neural representation network). The technique was originally created by https://twitter.com/advadnoun
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
DALLE-datasets - This is a summary of easily available datasets for generalized DALLE-pytorch training.
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
perceiver-pytorch - Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch
imagen-pytorch - Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch