x-transformers vs pytorch-image-models

| | x-transformers | pytorch-image-models |
|---|---|---|
| Mentions | 10 | 35 |
| Stars | 4,147 | 29,828 |
| Growth | - | 1.2% |
| Activity | 8.7 | 9.4 |
| Latest commit | 2 days ago | 1 day ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
x-transformers
- x-transformers
- GPT-4 architecture: what we can deduce from research literature
- Doubt about transformers
- The GPT Architecture, on a Napkin
It is all documented here, in writing and in code: https://github.com/lucidrains/x-transformers
You will want to use rotary embeddings if you do not need length extrapolation.
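For concreteness, here is a minimal sketch of the decoder-only setup those comments point at, using the library's documented `rotary_pos_emb` flag (the vocabulary size and layer dimensions below are placeholder choices, not from the thread):

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# GPT-style decoder-only transformer with rotary positional embeddings
model = TransformerWrapper(
    num_tokens = 20000,          # placeholder vocabulary size
    max_seq_len = 1024,
    attn_layers = Decoder(
        dim = 512,
        depth = 6,
        heads = 8,
        rotary_pos_emb = True    # rotary embeddings, as recommended above
    )
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)           # (1, 1024, 20000) next-token logits
```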
- [R] Deepmind's Gato: a generalist learning agent
It is just a single transformer encoder, so just use https://github.com/lucidrains/x-transformers with ff_glu set to True.
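Assuming the comment means the library's `ff_glu` keyword (which swaps the feedforward layers for GLU variants), a minimal sketch might look like this; all sizes are placeholders:

```python
from x_transformers import TransformerWrapper, Encoder

# plain transformer encoder with GLU feedforward layers, per the comment
model = TransformerWrapper(
    num_tokens = 20000,          # placeholder vocabulary size
    max_seq_len = 1024,
    attn_layers = Encoder(
        dim = 512,
        depth = 6,
        heads = 8,
        ff_glu = True            # gated linear unit feedforward
    )
)
```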
- [D] Transformer sequence generation - is it truly quadratic scaling?
However, I've come across the concept of key/value caching in transformer decoders recently (e.g. Figure 3 here): because each output (and hence each input, since the model is autoregressive) only depends on previous outputs (inputs), we don't need to re-compute key and value vectors for all t < t_i at timestep i of the sequence. My intuition, then, is that (unconditioned) inference for a decoder-only model uses an effective sequence length of 1 (the most recently produced token is the only input that requires new computation), making attention at each decoding step a linear-complexity operation. This thinking seems to be validated by this GitHub issue, and this paper (2nd paragraph of the Introduction).
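A minimal sketch of that caching idea (plain single-head attention with hypothetical weight tensors, not any particular library's API): each step computes Q/K/V only for the newest token and reuses the cached keys and values, so the per-step cost grows linearly with the sequence length instead of quadratically.

```python
import torch

def attend_with_cache(x_new, w_q, w_k, w_v, cache):
    """One autoregressive decoding step with a KV cache.

    x_new: (batch, 1, dim) embedding of the single newest token.
    cache: dict holding the keys/values of all earlier steps (empty at t=0).
    """
    q = x_new @ w_q                                   # only the new token's query
    k_new, v_new = x_new @ w_k, x_new @ w_v           # only the new token's K and V
    if 'k' in cache:                                  # reuse everything computed before
        k = torch.cat([cache['k'], k_new], dim=1)
        v = torch.cat([cache['v'], v_new], dim=1)
    else:
        k, v = k_new, v_new
    cache['k'], cache['v'] = k, v                     # extend the cache for the next step
    attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                                   # (batch, 1, dim)

dim = 64
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
cache = {}
for t in range(5):                                    # 5 decoding steps, each O(t) not O(t^2)
    out = attend_with_cache(torch.randn(1, 1, dim), w_q, w_k, w_v, cache)
```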
- [D] Sudden drop in loss after hours of no improvement - is this a thing?
The Project - Model: The primary architecture consists of a CNN with a transformer encoder and decoder. At first, I used my own implementation of self-attention, but since it was not converging, I switched to the x-transformers implementation by lucidrains, as it includes improvements from many papers. The objective is simple: the CNN encoder converts images to a high-level representation and feeds them to the transformer encoder for information flow; finally, a transformer decoder tries to decode the text character-by-character with an autoregressive loss. After two weeks of trying different things, training still did not converge within the first hour, which is the usual mark I use to check whether a model is learning.
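A rough sketch of the architecture being described, wired up with x-transformers (the CNN backbone, dimensions, and vocabulary are stand-ins; only the overall CNN → encoder → cross-attending decoder shape comes from the post):

```python
import torch
from torch import nn
from x_transformers import TransformerWrapper, Encoder, Decoder

dim, vocab, max_len = 512, 100, 256               # hypothetical sizes

cnn = nn.Sequential(                              # stand-in CNN backbone
    nn.Conv2d(3, dim, 3, stride = 2, padding = 1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((8, 8)),
)
encoder = Encoder(dim = dim, depth = 6, heads = 8)
decoder = TransformerWrapper(
    num_tokens = vocab,                           # character vocabulary
    max_seq_len = max_len,
    attn_layers = Decoder(dim = dim, depth = 6, heads = 8, cross_attend = True),
)

img = torch.randn(1, 3, 64, 64)
feats = cnn(img).flatten(2).transpose(1, 2)       # (1, 64, dim) sequence of CNN features
memory = encoder(feats)                           # transformer encoder over CNN features
chars = torch.randint(0, vocab, (1, 32))          # target characters so far
logits = decoder(chars, context = memory)         # autoregressive character decoding
```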
- Hacker News top posts: May 9, 2021
X-Transformers: A fully-featured transformer with experimental features (25 comments)
- [D] Theoretical papers on transformers? (or attention mechanism, or just seq2seq?)
One thing I’ve looked at is that, as far as I can tell, there’s no obvious reason to distinguish between W_K and W_Q in the formulation of a transformer. However, if you build a transformer where you merge the two matrices, it still learns, but not as well. You can try out the code here. The training loss can be seen here, though we aborted the run because of how poorly it was doing.
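The observation rests on the fact that attention logits depend on W_Q and W_K only through their product: q_i · k_j = x_i W_Q W_K^T x_j^T, so a single matrix M = W_Q W_K^T is mathematically sufficient. A minimal sketch of a merged-QK attention layer (my own illustration, not the poster's code):

```python
import torch
import torch.nn as nn

class MergedQKAttention(nn.Module):
    """Single-head self-attention with W_Q and W_K merged into one matrix.

    Logits depend only on the product W_Q @ W_K.T, so one learned matrix M
    can play that role; this halves the QK parameters but, per the comment
    above, tends to train worse in practice.
    """
    def __init__(self, dim):
        super().__init__()
        self.merged_qk = nn.Linear(dim, dim, bias = False)  # stands in for W_Q @ W_K.T
        self.to_v = nn.Linear(dim, dim, bias = False)
        self.scale = dim ** -0.5

    def forward(self, x):                                   # x: (batch, seq, dim)
        logits = self.merged_qk(x) @ x.transpose(-2, -1)    # x M x^T
        attn = torch.softmax(logits * self.scale, dim = -1)
        return attn @ self.to_v(x)

layer = MergedQKAttention(64)
out = layer(torch.randn(2, 16, 64))                         # (2, 16, 64)
```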
pytorch-image-models
- FLaNK AI Weekly 18 March 2024
- [D] Hugging face and Timm
I am a PyTorch user working in CV, and I usually use PyTorch models. However, I see people in research papers use timm to train their models, and I don't understand what timm is. Is it a new framework like PyTorch? Furthermore, when I click the https://pypi.org/project/timm/ homepage, it takes me to the Hugging Face GitHub, https://github.com/huggingface/pytorch-image-models. Is there any connection between timm and Hugging Face? Many of my friends use Hugging Face, but I don't know much about it either; I just use plain PyTorch and torchvision.models.
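For anyone with the same question: timm is not a framework but a model library; `timm.create_model` returns an ordinary `torch.nn.Module` that drops into any PyTorch code (the model name below is an arbitrary example):

```python
import timm
import torch

# timm models are plain PyTorch modules, interchangeable with torchvision ones
model = timm.create_model('resnet50', pretrained = True)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)                 # (1, 1000) ImageNet class logits

print(len(timm.list_models()))        # the catalogue spans hundreds of architectures
```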
- FLaNK Stack Weekly for 07August2023
https://github.com/huggingface/pytorch-image-models https://huggingface.co/docs/timm/index
- [R] Nvidia RTX 4090 ML benchmarks. Under QEMU/KVM. Image + Transformers. FP16/FP32.
pytorch-image-models
- Inference on ResNet, can't work out the problem?
Additionally, you might find the timm library handy for this sort of work.
- Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows
This is still being pursued. Ross Wightman's timm[0,1] package (now on Hugging Face) has done a lot of this. There's also a V2 of ConvNeXt[2]. Ross does write about this a lot on Twitter, fwiw. I should also mention that there are still many transformer-based networks that beat convs. So there probably won't be a resurgence in convs until someone can show that there's a really strong reason for them. They have some advantages, but they also might not be flexible enough for the long-range tasks in segmentation and detection. But maybe they are.
FAIR definitely did great work with ConvNeXt, and I do hope to see more. There always need to be people pushing unpopular paradigms.
[0] https://github.com/huggingface/pytorch-image-models
[1] https://arxiv.org/abs/2110.00476
[2] https://arxiv.org/abs/2301.00808
- Problems with Learning Rate Finder in Pytorch Lightning
I am doing binary classification with a pre-trained EfficientNet (tf_efficientnet_l2). I froze all weights during training and replaced the classifier with a custom trainable one.
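The post's actual head definition isn't reproduced above; a plausible sketch of that setup with timm (the backbone name comes from the post, while the head sizes and dropout are my assumptions) might look like:

```python
import timm
import torch.nn as nn

# num_classes=0 strips timm's classifier, leaving a feature extractor
backbone = timm.create_model('tf_efficientnet_l2', pretrained = True, num_classes = 0)
for p in backbone.parameters():
    p.requires_grad = False           # freeze all pretrained weights

head = nn.Sequential(                 # custom trainable classifier (sizes assumed)
    nn.Linear(backbone.num_features, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, 1),                # single logit for binary classification
)

model = nn.Sequential(backbone, head)
```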
- PyTorch at the Edge: Deploying Over 964 TIMM Models on Android with TorchScript and Flutter
In this post, I’m going to show you how you can pick from 900+ SOTA models on TIMM, train them using best practices with fastai, and deploy them on Android using Flutter.
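The export step the post refers to boils down to tracing a timm model into TorchScript so the mobile runtime can load it; a minimal sketch (the model choice here is arbitrary):

```python
import timm
import torch

model = timm.create_model('mobilenetv3_large_100', pretrained = True)
model.eval()

example = torch.randn(1, 3, 224, 224)          # dummy input for tracing
scripted = torch.jit.trace(model, example)     # capture the graph as TorchScript
scripted.save('model.pt')                      # the file the Android side loads
```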
- ImageNet Advice
The other thing is, try to find tricks to speed up your experiments (if you haven't already). The most obvious are mixed-precision training, having your model train on a lower-resolution input first and then increasing the resolution later in training, stochastic depth, and a bunch more. Look for implementations in https://github.com/rwightman/pytorch-image-models .
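As an illustration of the first trick, here is a self-contained mixed-precision training step with torch.cuda.amp (the tiny model and data are stand-ins; a CUDA device is required):

```python
import torch
from torch import nn

device = 'cuda'
model = nn.Linear(128, 10).to(device)                  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr = 1e-2)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()                   # handles fp16 loss scaling

x = torch.randn(32, 128, device = device)
y = torch.randint(0, 10, (32,), device = device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():                        # run the forward pass in fp16 where safe
    loss = criterion(model(x), y)
scaler.scale(loss).backward()                          # scale loss so fp16 grads don't underflow
scaler.step(optimizer)                                 # unscales grads, then steps
scaler.update()
```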
- Doubt about transformers
What are some alternatives?
EasyOCR - Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
TimeSformer-pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
mmdetection - OpenMMLab Detection Toolbox and Benchmark
flamingo-pytorch - Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
mmcv - OpenMMLab Computer Vision Foundation
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
segmentation_models.pytorch - Segmentation models with pretrained backbones. PyTorch.
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
yolact - A simple, fully convolutional model for real-time instance segmentation.