x-transformers alternatives
Similar projects and alternatives to x-transformers
- EasyOCR: Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, and Cyrillic.
- pytorch-image-models: PyTorch image models, scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more.
- minGPT: A minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training.
- TimeSformer-pytorch: Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification.
- flamingo-pytorch: Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch.
- memory-efficient-attention-pytorch (discontinued): Implementation of memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory".
- DALLE-pytorch: Implementation/replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch.
- PaLM-pytorch: Implementation of the specific transformer architecture from PaLM - Scaling Language Modeling with Pathways.
- perceiver-pytorch: Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch.
x-transformers reviews and mentions
- x-transformers
- GPT-4 architecture: what we can deduce from research literature
- Doubt about transformers
- The GPT Architecture, on a Napkin
  It is all documented here, in writing and in code: https://github.com/lucidrains/x-transformers. You will want to use rotary embeddings if you do not need length extrapolation.
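  As a concrete illustration of that advice, here is a minimal decoder-only (GPT-style) sketch using the x-transformers API, with rotary_pos_emb = True enabling rotary embeddings (RoPE); the vocabulary size and layer dimensions are placeholders, not recommendations:

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# Decoder-only language model; rotary_pos_emb swaps absolute positional
# embeddings for rotary embeddings, per the comment above.
model = TransformerWrapper(
    num_tokens = 20000,           # vocabulary size (placeholder)
    max_seq_len = 1024,
    attn_layers = Decoder(
        dim = 512,
        depth = 6,
        heads = 8,
        rotary_pos_emb = True     # rotary embeddings instead of absolute
    )
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)            # (1, 1024, 20000)
```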
- [R] Deepmind's Gato: a generalist learning agent
  It is just a single transformer encoder, so just use https://github.com/lucidrains/x-transformers with ff_glu set to True.
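  A minimal sketch of that suggestion, assuming the x-transformers ff_glu flag, which swaps the feedforward block for a GLU variant as in "GLU Variants Improve Transformer" (Shazeer, 2020); all sizes below are placeholders:

```python
import torch
from x_transformers import TransformerWrapper, Encoder

# Single transformer encoder with a GLU feedforward, per the comment above.
model = TransformerWrapper(
    num_tokens = 50000,
    max_seq_len = 1024,
    attn_layers = Encoder(
        dim = 512,
        depth = 6,
        heads = 8,
        ff_glu = True             # GLU variant in the feedforward block
    )
)

tokens = torch.randint(0, 50000, (1, 1024))
out = model(tokens)               # (1, 1024, 50000)
```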
- [D] Transformer sequence generation - is it truly quadratic scaling?
  However, I've recently come across the concept of key-value caching in transformer decoders (e.g., Figure 3 here): because each output (and hence each input, since the model is autoregressive) depends only on previous outputs, we don't need to recompute the key and value vectors for all t < t_i at timestep i of the sequence. My intuition is therefore that (unconditioned) inference for a decoder-only model uses an effective sequence length of 1 (the most recently produced token is the only input that requires new computation), making attention a linear-complexity operation per step. This thinking seems to be validated by this GitHub issue and this paper (second paragraph of the introduction).
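  To make the caching argument concrete, here is a toy single-head sketch (illustrative only, not the x-transformers API): each decoding step projects only the newest token and reuses cached keys and values, so per-step attention touches t cached entries rather than recomputing the full t × t score matrix:

```python
import torch
import torch.nn.functional as F

# Minimal single-head KV cache. Per-step projection work is O(1) in
# sequence length and per-step attention is O(t), instead of redoing
# the full O(t^2) attention from scratch at every step.
d = 64
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
k_cache, v_cache = [], []

def decode_step(x_new):            # x_new: (1, d) embedding of latest token
    q = x_new @ W_q                # query only for the new position
    k_cache.append(x_new @ W_k)    # compute and cache only the new key/value
    v_cache.append(x_new @ W_v)
    K = torch.cat(k_cache)         # (t, d): all keys so far
    V = torch.cat(v_cache)         # (t, d): all values so far
    attn = F.softmax(q @ K.t() * d ** -0.5, dim=-1)
    return attn @ V                # (1, d) output for the new token only

for _ in range(5):                 # toy autoregressive loop
    out = decode_step(torch.randn(1, d))
```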
- [D] Sudden drop in loss after hours of no improvement - is this a thing?
  The project's model: the primary architecture consists of a CNN with a transformer encoder and decoder. At first I used my own implementation of self-attention, but since it was not converging I switched to the x-transformers implementation by lucidrains, as it includes improvements from many papers. The objective is simple: the CNN encoder converts images into a high-level representation and feeds it to the transformer encoder; a transformer decoder then decodes the text character by character with an autoregressive loss. After two weeks of trying different things, training still did not converge within the first hour, which is the usual mark I use to check whether a model is learning.
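  A hedged sketch of the pipeline described above, assuming x-transformers' ContinuousTransformerWrapper for the continuous CNN features and a cross-attending Decoder; the CNN backbone, vocabulary size, and dimensions are illustrative placeholders, not the poster's actual setup:

```python
import torch
from torch import nn
from x_transformers import (ContinuousTransformerWrapper, TransformerWrapper,
                            Encoder, Decoder)

cnn = nn.Sequential(                       # toy backbone, stands in for the real CNN
    nn.Conv2d(3, 256, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(256, 512, 3, stride=2, padding=1),
)

encoder = ContinuousTransformerWrapper(    # consumes continuous CNN features
    dim_in = 512, dim_out = 512, max_seq_len = 1024,
    attn_layers = Encoder(dim = 512, depth = 6, heads = 8),
)

decoder = TransformerWrapper(              # autoregressive character decoder
    num_tokens = 100,                      # character vocabulary (placeholder)
    max_seq_len = 256,
    attn_layers = Decoder(dim = 512, depth = 6, heads = 8, cross_attend = True),
)

images = torch.randn(2, 3, 64, 256)
feats = cnn(images).flatten(2).transpose(1, 2)   # (B, H'*W', 512) feature tokens
context = encoder(feats)                         # contextualized image tokens
chars = torch.randint(0, 100, (2, 128))          # teacher-forced target characters
logits = decoder(chars, context = context)       # decoder cross-attends to images
```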
- Hacker News top posts: May 9, 2021
  X-Transformers: A fully-featured transformer with experimental features (25 comments)
- X-Transformers: A fully-featured transformer with experimental features
- [D] Theoretical papers on transformers? (or attention mechanism, or just seq2seq?)
  One thing I've looked at is the fact that there is no obvious reason to distinguish between W_K and W_Q in the formulation of a transformer, as far as I can tell. However, if you build a transformer that merges the two matrices, it doesn't learn as well - it still learns, just not as well. You can try out the code here; the training loss can be seen here, though we aborted the run because of how poorly it was doing.
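  The experiment is easy to reproduce in miniature. Below is a toy single-head attention module (names and setup are mine, not the poster's code) with an option to tie W_Q and W_K. Note that separate projections realize an arbitrary bilinear score form x W_Q W_K^T x^T, while tying W_Q = W_K = W restricts it to the symmetric positive semi-definite form x W W^T x^T, which is one plausible reason the merged variant learns less well:

```python
import torch
import torch.nn.functional as F
from torch import nn

class TiedQKAttention(nn.Module):
    def __init__(self, dim, tie_qk=False):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        # tie_qk=True reuses the query projection as the key projection,
        # i.e. the merged-matrix variant discussed above
        self.to_k = self.to_q if tie_qk else nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x):                         # x: (batch, seq, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        scores = q @ k.transpose(-2, -1) * self.scale
        return F.softmax(scores, dim=-1) @ v

x = torch.randn(2, 16, 64)
out = TiedQKAttention(64, tie_qk=True)(x)         # merged W_Q = W_K variant
```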
Stats
lucidrains/x-transformers is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of x-transformers is Python.
Popular Comparisons
- x-transformers VS EasyOCR
- x-transformers VS TimeSformer-pytorch
- x-transformers VS flamingo-pytorch
- x-transformers VS memory-efficient-attention-pytorch
- x-transformers VS DALLE-pytorch
- x-transformers VS performer-pytorch
- x-transformers VS SpecBAS
- x-transformers VS PaLM-pytorch
- x-transformers VS perceiver-pytorch
- x-transformers VS euporie