x-transformers vs perceiver-pytorch

| | x-transformers | perceiver-pytorch |
|---|---|---|
| Mentions | 10 | 11 |
| Stars | 4,147 | 1,048 |
| Growth | - | - |
| Activity | 8.7 | 3.1 |
| Last Commit | 3 days ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning x-transformers
- x-transformers
- GPT-4 architecture: what we can deduce from research literature
- Doubt about transformers
- The GPT Architecture, on a Napkin
it is all documented here, in writing and in code https://github.com/lucidrains/x-transformers
you will want to use rotary embeddings, if you do not need length extrapolation
- [R] Deepmind's Gato: a generalist learning agent
it is just a single transformer encoder, so just use https://github.com/lucidrains/x-transformers with ff_glu set to True
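Taken together with the rotary-embedding suggestion in the post above, a minimal sketch of that configuration with x-transformers might look like the following (the vocabulary size, sequence length, and model dimensions are placeholder values, not anything from the posts):

```python
import torch
from x_transformers import TransformerWrapper, Encoder

model = TransformerWrapper(
    num_tokens = 20000,          # placeholder vocabulary size
    max_seq_len = 1024,
    attn_layers = Encoder(
        dim = 512,
        depth = 6,
        heads = 8,
        ff_glu = True,           # gated (GLU) feedforward variant, as suggested above
        rotary_pos_emb = True    # rotary embeddings instead of absolute positions
    )
)

tokens = torch.randint(0, 20000, (1, 1024))
embeddings = model(tokens, return_embeddings = True)  # (1, 1024, 512)
```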
- [D] Transformer sequence generation - is it truly quadratic scaling?
However, I've come across the concept of Key, Value Caching in Transformer-Decoders recently (e.g. Figure 3 here), wherein because each output (and hence each input, since the model is autoregressive) only depends on previous outputs (inputs), we don't need to re-compute Key and Value vectors for all t < t_i at timestep i of the sequence. My intuition leads me to believe, then, that (unconditioned) inference for a decoder-only model uses an effective sequence length of 1 (the most recently produced token is the only input that requires new computation), making Attention a linear-complexity operation per step. This thinking seems to be validated by this github issue, and this paper (2nd paragraph of Introduction).
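A toy sketch of that caching idea (single head, learned projections omitted, names are illustrative): each decoding step computes a query only for the newest token and attends over cached keys/values, so the per-step cost grows linearly in the current length rather than recomputing the full attention matrix.

```python
import torch

def step_attention(q, k_cache, v_cache):
    # q: (B, 1, d) for the newest token; caches: (B, t, d) for everything generated so far
    scores = q @ k_cache.transpose(-2, -1) / q.shape[-1] ** 0.5  # (B, 1, t)
    return scores.softmax(dim=-1) @ v_cache                      # (B, 1, d)

B, d = 1, 64
k_cache = torch.empty(B, 0, d)
v_cache = torch.empty(B, 0, d)

for t in range(8):                            # autoregressive decoding loop
    x = torch.randn(B, 1, d)                  # embedding of the single newest token
    q, k, v = x, x, x                         # real models apply learned projections here
    k_cache = torch.cat([k_cache, k], dim=1)  # cache grows by one entry per step
    v_cache = torch.cat([v_cache, v], dim=1)
    out = step_attention(q, k_cache, v_cache)
```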
- [D] Sudden drop in loss after hours of no improvement - is this a thing?
The Project - Model: The primary architecture is a CNN with a transformer encoder and decoder. At first I used my own implementation of self-attention, but since it was not converging I switched to the x-transformers implementation by lucidrains, as it includes improvements from many papers. The objective is simple: the CNN encoder converts images into a high-level representation and feeds it to the transformer encoder for information flow; a transformer decoder then decodes the text character by character with an autoregressive loss. After two weeks of trying different things, training did not converge within the first hour, which is the usual mark I use to check whether a model is learning.
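A rough sketch of the pipeline described above, using x-transformers for the encoder and decoder. The CNN backbone, vocabulary size, and shapes below are placeholder assumptions for illustration, not the poster's actual code:

```python
import torch
from torch import nn
from x_transformers import TransformerWrapper, Encoder, Decoder

# placeholder CNN backbone that turns an image into a sequence of feature vectors
cnn = nn.Sequential(
    nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 512, 3, stride=2, padding=1), nn.ReLU(),
)

encoder = Encoder(dim = 512, depth = 6, heads = 8)

decoder = TransformerWrapper(
    num_tokens = 100,                 # placeholder character vocabulary
    max_seq_len = 256,
    attn_layers = Decoder(dim = 512, depth = 6, heads = 8, cross_attend = True),
)

images = torch.randn(2, 1, 64, 256)               # (B, C, H, W)
feats = cnn(images).flatten(2).transpose(1, 2)    # (B, H'*W', 512) image tokens
context = encoder(feats)                          # transformer encoder over image tokens
chars = torch.randint(0, 100, (2, 128))           # shifted target characters
logits = decoder(chars, context = context)        # (B, 128, 100), trained with cross-entropy
```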
- Hacker News top posts: May 9, 2021
X-Transformers: A fully-featured transformer with experimental features (25 comments)
- X-Transformers: A fully-featured transformer with experimental features
- [D] Theoretical papers on transformers? (or attention mechanism, or just seq2seq?)
One thing I’ve looked at is that, as far as I can tell, there’s no obvious reason to distinguish between W_K and W_Q in the formulation of a transformer. However, if you build a transformer where you merge the two matrices, it still learns, but not as well. You can try out the code here. The training loss can be seen here, though we aborted the run because of how poorly it was doing.
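A minimal sketch of what "merging W_K and W_Q" could mean in practice (single head, illustrative names, not the commenter's code): queries and keys share one projection matrix, which also makes the pre-softmax score matrix symmetric.

```python
import torch
from torch import nn

class SharedQKAttention(nn.Module):
    """Single-head self-attention where queries and keys use the same matrix."""
    def __init__(self, dim):
        super().__init__()
        self.to_qk = nn.Linear(dim, dim, bias = False)  # plays the role of both W_Q and W_K
        self.to_v = nn.Linear(dim, dim, bias = False)
        self.scale = dim ** -0.5

    def forward(self, x):                                # x: (B, N, dim)
        qk = self.to_qk(x)                               # shared projection
        v = self.to_v(x)
        scores = qk @ qk.transpose(-2, -1) * self.scale  # symmetric (B, N, N) scores
        return scores.softmax(dim = -1) @ v
```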
Posts mentioning perceiver-pytorch
- /r/StableDiffusion – Mod here – My side of the story
I think that's conflating two separate things:
1. Accusations that AUTOMATIC1111 (the web frontend developer) copied code from the NovelAI leak relating to the loading of hypernetworks
2. Anlatan (company behind NovelAI) copying code from AUTOMATIC1111's repo, which does not have a permissive license, relating to the weighting of words
The third party MIT-licensed code is relevant to #1. Some code AUTOMATIC1111 was accused of copying from the leak (https://i.imgur.com/r1AkvBG.png) actually already appears in multiple older permissively-licensed public repos (https://github.com/lucidrains/perceiver-pytorch/blame/main/p..., https://github.com/CompVis/stable-diffusion/blob/main/ldm/mo...), one of which AUTOMATIC1111 credited in his readme.
For #2, the Anlatan CEO blamed it on an intern (https://i.imgur.com/BFjKG1V.png). The leak shows that the offending code was committed by the CEO (https://i.imgur.com/aLiA2tr.png), which doesn't necessarily rule out it originating from an intern (e.g. "send me the code over Teams to review and I'll add it"), but it doesn't look great.
From other examples I'd say AUTOMATIC1111 did get a bit sloppy in terms of not following clean-room design regarding the leak, but I'm inclined to give some leeway to a solo developer making a hugely popular public tool for free.
- [Hobby Scuffles] Week of October 10, 2022
Auto refuses to comply, and explains that the code he wrote is based on research and development that was done quite some time ago and is open-source. The function in question was published on December 21, 2021 here: https://github.com/CompVis/latent-diffusion/commit/e66308c7f2e64cb581c6d27ab6fbeb846828253b. But that is in fact still not the original source. The original source code was published on August 3, 2021 here: https://github.com/lucidrains/perceiver-pytorch. The original code's license allows commercial use, so nobody is wrong for using it. The license can be read here: https://github.com/lucidrains/perceiver-pytorch/blob/main/LICENSE.
- Creators of AI Art generator exclude Automatic1111 when his work eclipses their own in popularity. Appearing to be moving to /r/sdforall (+2000 users in 8 hours). Automatic1111's windows install supports features unpopular with the original authors that are ultimately open source (sources below).
- Show HN: InvokeAI, an open source Stable Diffusion toolkit and WebUI
This is the file in that other repo that the code actually seems to originate from https://github.com/lucidrains/perceiver-pytorch/blame/main/p...
As you can see, that repo from 2 years ago even originates the "# attention, what we cannot get enough of" comment and is an exact 1:1 match to Automatic's commit, while the NovelAI version has a small change in the if clause that Automatic's doesn't.
- AUTOMATIC111 Code reference
from the original repo, as posted by OP https://github.com/lucidrains/perceiver-pytorch/blob/main/LICENSE
- Recent announcement from Emad
From a quick search, a big part of the other code also seems to be basic boilerplate? For example half the lines match exactly to https://github.com/lucidrains/perceiver-pytorch/blob/main/perceiver_pytorch/perceiver_pytorch.py
- [D] Handling variable number of outputs in an NN
You want Perceiver IO (GitHub) (Paper). It's specifically intended to generate arbitrary shape outputs (and to accept arbitrary shape inputs).
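A short usage sketch with the PerceiverIO class from lucidrains' perceiver-pytorch, following its readme: the output shape is set entirely by the queries you pass in, so input and output lengths can vary independently (the sizes below are placeholder values):

```python
import torch
from perceiver_pytorch import PerceiverIO

model = PerceiverIO(
    dim = 32,             # dimension of each input token
    queries_dim = 32,     # dimension of each output query
    logits_dim = 100,     # channels per output element
    depth = 6,
    num_latents = 256,    # fixed-size latent bottleneck, independent of input length
    latent_dim = 512,
)

inputs = torch.randn(1, 512, 32)        # arbitrary-length input
queries = torch.randn(1, 128, 32)       # one query per desired output element

out = model(inputs, queries = queries)  # (1, 128, 100) -- output shape follows the queries
```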
What are some alternatives?
EasyOCR - Ready-to-use OCR with 80+ supported languages and all popular writing scripts including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
tab-transformer-pytorch - Implementation of TabTransformer, attention network for tabular data, in Pytorch
TimeSformer-pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
ai-notes - notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references under the /Resources folder.
flamingo-pytorch - Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
generation-q - A cross-platform desktop app with a nice interface to Stable Diffusion and others
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
slot-attention - Implementation of Slot Attention from GoogleAI
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
stable_diffusion.openvino