| | Swin-Transformer | latent-diffusion |
|---|---|---|
| Mentions | 23 | 70 |
| Stars | 13,002 | 10,622 |
| Growth | 1.7% | 2.8% |
| Activity | 2.8 | 0.0 |
| Latest commit | 24 days ago | 2 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Swin-Transformer
-
Samsung expected to report 80% profit plunge as losses mount at chip business
> there is really nothing that "normal" AI requires that is bound to CUDA. pyTorch and Tensorflow are backend agnostic (ideally...).
There are a lot of optimizations in CUDA that are nowhere near supported in other software, or even hardware. Custom CUDA kernels also aren't as rare as one might think; they will often just be hidden unless you're looking inside libraries. A more well-known example is StyleGAN[0], but it isn't uncommon elsewhere, even in research code. Swin has a CUDA kernel[1]. Or look at torch itself[2] (GitHub reports that 4% of the code is CUDA, 42% C++, and 2% C). These things are everywhere. I don't think PyTorch and TensorFlow could ever be truly backend agnostic; there will always be a difference, simply because you have to spend resources differently (developing kernels takes time). Intel MKL is evidence of the same dynamic: it is still better than the open-source libraries and has been for a long time.
I really do want AMD to compete in this space. I'd even love a third player like Intel. We really do need competition here, but it would be naive to think that there's going to be a quick catchup here. AMD has a lot of work to do and posting a few bounties and starting a company (idk, called "micro grad"?) isn't going to solve the problem anytime soon.
And fwiw, I'm willing to bet that most AI companies would rather run in house servers than from cloud service providers. The truth is that right now just publishing is extremely correlated to compute infrastructure (doesn't need to be but with all the noise we've just said "fuck the poor" because rejecting is easy) and anyone building products has costly infrastructure.
[0] https://github.com/NVlabs/stylegan2-ada-pytorch/blob/d72cc7d...
[1] https://github.com/microsoft/Swin-Transformer/blob/2cb103f2d...
[2] https://github.com/pytorch/pytorch/tree/main/aten/src
- Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
-
[D] Influential papers round-up 2022. What are your favorites?
ConvNeXt. The "A ConvNet for the 2020s" paper is a highlight for me because the authors were able to design a purely convolutional architecture that outperformed popular vision transformers such as Swin Transformer (and, of course, all convolutional neural networks that came before it).
-
[R] LiBai: a large-scale open-source model training toolbox
Found relevant code at https://github.com/microsoft/Swin-Transformer + all code implementations here
-
Using VIT as a feature extractor
Figures aside, you can re-form the image from the tokens if you want. This is what's done in Swin Transformers (https://arxiv.org/abs/2103.14030): patches are tokenized, transformed, and then re-assembled into an image-like tensor. The patch grid is shifted at every other transformer stage so that more information propagates from one patch to the next.
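The partition/shift/re-assemble round trip described above can be sketched in a few lines of NumPy. This is only an illustration of the idea, not the repo's actual API; the per-window attention is replaced by an identity transform, and all shapes and names are made up:

```python
import numpy as np

def window_partition(x, w):
    """(H, W, C) -> (num_windows, w, w, C); H and W must be divisible by w."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, w, w, C)

def window_reverse(windows, w, H, W):
    """Inverse of window_partition: re-assemble into an (H, W, C) tensor."""
    C = windows.shape[-1]
    x = windows.reshape(H // w, W // w, w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(H, W, C)

H = W = 8
w = 4
x = np.arange(H * W, dtype=np.float32).reshape(H, W, 1)

# Shifted stage: cyclically roll by w//2 so the new windows straddle the old
# window borders, partition, transform each window (identity here), undo all.
shifted = np.roll(x, shift=(-w // 2, -w // 2), axis=(0, 1))
wins = window_partition(shifted, w)   # per-window attention would go here
out = window_reverse(wins, w, H, W)
out = np.roll(out, shift=(w // 2, w // 2), axis=(0, 1))
```

The round trip is exact (`out == x`), which is the point: tokens can always be re-assembled into an image-like tensor, and the shift is what lets information cross window borders between stages.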
-
Pathways Autoregressive Text-to-Image Model (Parti)
Give it a few days and lucidrains will have the code up[0].
But in honesty, it is probably about how people react. We saw this with PULSE, GPT, and many others. The authors are clear about the limitations, but people talk it up too much and others shit on it. There's also a reproducibility crisis in ML (many famous networks, like Swin[1][2][3], can't be reproduced, which is even worse when reviewers concentrate on benchmarks). It isn't like many people can train a model like this anyway. Withholding gives them the benefit of the doubt and maintains good publicity rather than controversy.
Of course, this is extremely bad from an academic perspective, and personally I believe your paper should be revoked if it isn't reproducible. You'd be surprised how many don't track the random seed or measure variance. We have GitHub. You should be able to publish training options that get approximately the same results as the paper. Otherwise I don't trust your results.
[0] https://github.com/lucidrains/parti-pytorch
[1] https://github.com/microsoft/Swin-Transformer/issues/183
[2] https://github.com/microsoft/Swin-Transformer/issues/180
[3] https://github.com/microsoft/Swin-Transformer/issues/148
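The seed-tracking complaint above amounts to a few lines of bookkeeping that many repos skip. A hedged sketch of the minimum (a PyTorch run would additionally call `torch.manual_seed(seed)` and set `torch.backends.cudnn.deterministic = True`; the function and config names here are illustrative):

```python
import random
import numpy as np

def seed_everything(seed: int) -> int:
    """Fix every RNG the run touches, and return the seed so it gets logged."""
    random.seed(seed)
    np.random.seed(seed)
    return seed

# Record the seed alongside the reported metrics, not just in someone's head.
run_config = {"seed": seed_everything(42)}

# Same seed -> same draws, which is what makes reruns comparable.
a = np.random.rand(3)
seed_everything(run_config["seed"])
b = np.random.rand(3)
```

With this in place, "run the training script with the published config" can actually reproduce the paper's numbers, up to hardware nondeterminism.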
-
[D] What do you value in a paper replication?
That's about it. I should be able to go to your code, hit run, and reproduce your results (or land within the reported variance). If you don't meet these criteria, then I'm going to be pretty upset and lose a lot of respect for your work. I think we should also put pressure on papers that don't meet these conditions, especially if they are pushing the benchmarks (I'm looking at you, Swin). If you win on benchmarks due to the silicon lottery, then we shouldn't be trusting you.
latent-diffusion
-
SDXL: The next generation of Stable Diffusion models for text-to-image synthesis
Stable Diffusion XL (SDXL) is the latest text-to-image generation model developed by Stability AI, based on latent diffusion techniques. SDXL can create highly realistic images for media, entertainment, education, and industry, opening up new practical uses for AI imagery.
-
Is it possible to create a checkpoint from scratch?
Here's a link to the early latent-diffusion git, that might be able to create a blank model (I haven't tested it): https://github.com/CompVis/latent-diffusion
-
Anything better than pix2pixHD?
Latent diffusion could work for you: https://github.com/CompVis/latent-diffusion (https://arxiv.org/abs/2112.10752)
-
Image Upscaler AI
There are a lot but the one implemented as LDSR in most stable guis is this one. https://github.com/CompVis/latent-diffusion
-
I've been collecting millions of images of only public domain /cc0 licensing. I'd like to train a stable diffusion model on the collection. Could some one share their knowledge of what this would take? Otherwise, simply enjoy my library.
CompVis/latent-diffusion: High-Resolution Image Synthesis with Latent Diffusion Models (github.com)
-
Run Clip on iPhone to Search Photos
The "retrieval based model" refers to https://github.com/CompVis/latent-diffusion#retrieval-augmen..., which uses ScaNN to train a knn embedding searcher.
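The retrieval step mentioned above is, at its core, nearest-neighbour search over an embedding table. The real repo uses ScaNN over CLIP embeddings; as an illustration only, plain cosine similarity with NumPy can stand in, and every name below is made up:

```python
import numpy as np

def knn(query, table, k=3):
    """Return indices of the k table rows most cosine-similar to query."""
    q = query / np.linalg.norm(query)
    t = table / np.linalg.norm(table, axis=1, keepdims=True)
    sims = t @ q                      # cosine similarity to every row
    return np.argsort(-sims)[:k]     # best k, most similar first

rng = np.random.default_rng(0)
table = rng.normal(size=(100, 16))               # stand-in embedding table
query = table[7] + 0.01 * rng.normal(size=16)    # a query near row 7
idx = knn(query, table, k=3)
```

ScaNN exists because this brute-force scan is O(rows) per query; for millions of embeddings an approximate index gives the same top results far faster.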
-
Class Action Lawsuit filed against Stable Diffusion and Midjourney.
Stability is basically https://github.com/CompVis/latent-diffusion + training data.
-
[D] Influential papers round-up 2022. What are your favorites?
Found relevant code at https://github.com/CompVis/latent-diffusion + all code implementations here
-
Can anyone explain differences between sampling methods and their uses to me in simple terms, because all the info I've found so far is either very contradicting or complex and goes over my head
DDIM and PLMS were the original samplers; they were part of the latent-diffusion repository. Their names come from the papers that introduced them: Denoising Diffusion Implicit Models and Pseudo Numerical Methods for Diffusion Models on Manifolds.
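To make the DDIM half concrete: its deterministic update (eta = 0) first estimates the clean image from the model's noise prediction, then re-noises to the earlier timestep. A hedged sketch, where `eps` stands in for the trained noise-prediction network's output and `abar` for the cumulative alpha schedule:

```python
import numpy as np

def ddim_step(x_t, eps, abar_t, abar_prev):
    """One deterministic DDIM update x_t -> x_{t-1} (eta = 0)."""
    # Estimate the clean sample implied by the noise prediction...
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps) / np.sqrt(abar_t)
    # ...then move it to the earlier (less noisy) timestep.
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps

# Sanity check: if eps is the exact noise that produced x_t from x0, then
# stepping all the way to abar_prev = 1 must recover x0.
rng = np.random.default_rng(1)
x0 = rng.normal(size=4)
noise = rng.normal(size=4)
abar_t = 0.5
x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * noise
x_prev = ddim_step(x_t, noise, abar_t, abar_prev=1.0)
```

PLMS differs mainly in how it extrapolates the noise prediction across steps (a linear multistep method), which is why the two samplers converge at different speeds.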
-
AI art is very dystopian.
yes, https://github.com/CompVis/latent-diffusion
What are some alternatives?
Swin-Transformer-Tensorflow - Unofficial implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (https://arxiv.org/abs/2103.14030)
disco-diffusion
parti-pytorch - Implementation of Parti, Google's pure attention-based text-to-image neural network, in Pytorch
dalle-mini - DALL·E Mini - Generate images from a text prompt
Video-Swin-Transformer - This is an official implementation for "Video Swin Transformers".
hent-AI - Automation of censor bar detection
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
dalle-2-preview
ConvNeXt - Code release for ConvNeXt model
stable-diffusion
semantic-segmentation-pytorch - Pytorch implementation for Semantic Segmentation/Scene Parsing on MIT ADE20K dataset
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch