tensor2tensor
DISCONTINUED
StyleCLIP
| | tensor2tensor | StyleCLIP |
|---|---|---|
| Mentions | 8 | 23 |
| Stars | 13,873 | 3,863 |
| Growth | - | - |
| Activity | 6.2 | 0.0 |
| Latest commit | 10 months ago | 10 months ago |
| Language | Python | HTML |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tensor2tensor
-
[P] Why the Original Transformer Figure Is Wrong, And Some Other Interesting Tidbits
It's an interesting question. The original and official code used Post-LN. But then, after uploading the preprint, they changed it to Pre-LN via this commit in Aug 2017: https://github.com/tensorflow/tensor2tensor/commit/f5c9b17e617ea9179b7d84d36b1e8162cb369f25
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
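The Post-LN/Pre-LN distinction discussed above comes down to where layer normalization sits relative to the residual connection. A minimal numpy sketch of the two block layouts (illustrative only, not the tensor2tensor code; `sublayer` stands in for attention or the feed-forward block):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize the last axis to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln_block(x, sublayer):
    # Post-LN (as in the published figure): normalize AFTER the residual add.
    return layer_norm(x + sublayer(x))

def pre_ln_block(x, sublayer):
    # Pre-LN (as in the code after the Aug 2017 change): normalize the input
    # to the sublayer; the residual path itself is never normalized.
    return x + sublayer(layer_norm(x))
```

The practical difference: in Pre-LN the residual stream passes through untouched, which is widely credited with making deep stacks train more stably.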
- Why the Original Transformer LLM Figure Is Wrong, and Other Interesting Tidbits
-
What Are Transformer Models and How Do They Work?
The visualisation here may be helpful.
-
[P] Why I quit my lucrative job at Google to start Vectara? (neural search as a service for developers everywhere).
Found relevant code at https://github.com/tensorflow/tensor2tensor + all code implementations here
-
[D] Resources for Understanding The Original Transformer Paper
Code for https://arxiv.org/abs/1706.03762 found: https://github.com/tensorflow/tensor2tensor
-
Alias-Free GAN
Roughly every assumption you've stated is mistaken.
I would say that your view here is what I thought ML would be when I got in. If I had your faith in the process still, I would be saying the same things you're saying here.
The reason I'm saying the exact opposite is to ensure that what you've said becomes the norm.
Let's go through your points. I'll address each of them in detail.
Think of playing pool at a bar together with your coworker. You've been on the job for some years; they just got their Github credentials, and are eager to get started.
While you're playing pool, your friend starts trying to convince you of something you know isn't true. What do you do? You listen, chat, and keep playing pool.
Your theory is that they'll learn on the job that what they're saying makes no sense, so, your best bet for now is to relax and keep playing pool.
You're trying to convince me of your position. Unfortunately, the things you've been saying indicate that you haven't had a lot of experience doing what you're proposing. If you had, you'd be saying something close to what I'm saying now. Which of us should change their minds?
I probably should. I spent two years trying to convince myself that none of what I was saying was true.
That's called "gaslighting": https://en.wikipedia.org/wiki/Gaslighting
I was reluctant to call explicit attention to that word, since I really was trying to chill with you and just talk.
But if you're trying to understand why I was stressed, it's because I really felt that many of the papers I tried to reproduce, use for my own purposes, or integrate into my bag of tricks, were claiming things that I'd say are mistaken knowledge.
You seem to be under the impression that, when Karras releases the contribution, the science will be verified.
The science doesn't get verified. Karras is already working on the next thing.
The verification here is that the videos clearly show the results. That's nice. That gives me a target to aim for, if I wanted to try Karras' method.
But it doesn't help me verify Karras' claims. Firstly, there's no way to know whether I've achieved "success," or something approximate to success. Maybe my model is mostly right, but there was some curious quirk during training that made it slightly different. I'm not worried about that case though.
The real problem is that there aren't any tools when things go wrong. When I try to reproduce a paper by reading it, there's nothing to help me. Your position is "just be smarter." My position is, "I've tried. Many times."
Either I'm very stupid, or the paper seems to be mistaken.
That's how I end up feeling after most of the papers I tried to replicate. Many of these replication attempts lasted weeks. My longest replication lasted over a year, before I found the crucial bug preventing me from doing exactly what you want me to do. (BatchNorm layers were initialized to 0 instead of 1 in BigGan-Deep's "unofficial but closest we have" codebase: https://github.com/google/compare_gan/issues/54)
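The failure mode behind that bug is easy to reproduce in miniature: a batch-norm scale (gamma) initialized to 0 multiplies every normalized activation by zero, so the layer silently outputs nothing while the rest of the network appears to run fine. A toy numpy sketch (illustrative only, not the compare_gan code):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Batch normalization over the batch axis, with learnable scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

x = np.random.default_rng(0).normal(size=(4, 3))

# Correct initialization: scale starts at 1, so normalized activations
# pass through and gradients can flow.
ok = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))

# The bug described above: scale initialized to 0 multiplies every
# normalized activation by zero, killing the signal from the start.
broken = batch_norm(x, gamma=np.zeros(3), beta=np.zeros(3))
```

Nothing crashes, no NaNs appear, and the model may still "seem like it works" -- which is exactly why this class of error survives for years.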
If you haven't had this experience, you will. The only reason you're saying the things you're saying, is because you haven't spent a lot of time trying. I feel this in the core of my being; if it's mistaken, please, I'd love to know.
Let's start with a simple example.
https://github.com/tensorflow/tensor2tensor/pull/1883
Now, here's a model in tensor2tensor, a pretty popular lib. I explain in the PR why this model was "catastrophically broken, from the beginning, but may have seemed like it worked."
I would say that many, many ML papers have such an error.
So when you're saying "reproduce the model," you mean "reproduce the errors in their code," if the code isn't available. Which it isn't here, until September. Therefore, it's not a scientific contribution until September.
Now, from what I hear, your position seems to be that in September, the science will happen. That's true. The science may happen, because Karras.
Most of us don't learn from Karras. Karras is impactful. But there's a whole long tail of followers that try to follow Karras' example. And those often don't release models: https://news.ycombinator.com/item?id=27127198
The reply goes into detail about why models aren't released, and whether that's less frequent now than it used to be. But my point is that in that case -- that thread -- I believe science wasn't happening. Do you agree?
If we don't agree on that point, then I feel there's a fundamental difference of opinion. We'll have to agree to disagree, or persuade each other to change our minds. If you want more examples, I can give you many.
My contention is that if I tried to reproduce the model in that thread -- which I did, successfully, with BigGAN-Deep in TensorFlow -- it would take me over a year. Which it did, for BigGAN-Deep.
Your feelings are, well yes, but the paper gave you some useful ideas to use.
I'm saying, the code didn't work at all. The model was the only thing that saved me. I reverse-engineered the DeepMind public model release, including the TensorFlow graph, looking for the difference between that model and the training code I was using.
The final fix, was to change 0 to 1.
The model worked.
Either I am very stupid, or we're in territory where a certain scientific rigor is warranted.
The reason I'm speaking my mind now, here, on a Karras release, is because most releases aren't Karras quality. They leave parts out of the process, as Karras is doing here. Sometimes those parts come later. Most of the time, they don't.
Now. As someone who has done what I've described above, for two years straight, which of my assumptions feel mistaken?
StyleCLIP
-
A History of CLIP Model Training Data Advances
While CLIP on its own is useful for applications such as zero-shot classification, semantic searches, and unsupervised data exploration, CLIP is also used as a building block in a vast array of multimodal applications, from Stable Diffusion and DALL-E to StyleCLIP and OWL-ViT. For most of these downstream applications, the initial CLIP model is regarded as a “pre-trained” starting point, and the entire model is fine-tuned for its new use case.
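The zero-shot classification use mentioned above reduces to comparing one image embedding against a text embedding per candidate label and picking the closest. A minimal sketch with made-up embeddings (the function names are hypothetical, not the CLIP API):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(image_emb, text_embs, labels):
    # Score the image against one text embedding per candidate label
    # (e.g. "a photo of a cat") and return the best-matching label.
    scores = [cosine_sim(image_emb, t) for t in text_embs]
    return labels[int(np.argmax(scores))]
```

The same similarity score, swapped into a loss function, is what lets CLIP guide generators in the downstream applications listed above.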
-
[D] What is the largest / most diverse GAN model currently out there?
I'm currently building a fork for StyleCLIP global directions which allows you to control multiple semantic parameters simultaneously to generate and edit an image with StyleGAN and CLIP in real time. I want to showcase its potential as a design tool. Unfortunately, GAN weights are trained on very domain-specific (faces, cars, churches) data. This makes them inferior to modern diffusion models which I can use to generate whatever comes to mind. Although I know we won't have a GAN-based DALL-E counterpart anytime soon, I still would love to use my system with weights that can output a wide variety of things.
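The global-directions approach reduces each semantic edit to a direction vector in the generator's latent space, so combining several edits at once is just a weighted sum. A hedged numpy sketch (names and shapes are illustrative, not StyleCLIP's code):

```python
import numpy as np

def apply_global_directions(w, directions, strengths):
    # w: a StyleGAN-style latent code (any 1-D array here).
    # directions: {attribute name -> edit direction vector in latent space},
    #             e.g. directions found via CLIP for "smile" or "age".
    # strengths: {attribute name -> user-chosen scalar}, adjustable in real time.
    edit = sum(strengths[name] * d for name, d in directions.items())
    return w + edit
```

Because the edit is linear in the strengths, sliders can be dragged interactively without re-running any optimization, which is what makes the real-time design-tool use case plausible.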
-
test
(Added Feb. 15, 2021) StyleCLIP - Colaboratory by orpatashnik. Uses StyleGAN to generate images. GitHub. Twitter reference. Reddit post.
-
I used AI to generate real life for honor character faces
Link for Styleclip
-
AI Generated Art Scene Explodes as Hackers Create Groundbreaking New Tools - New AI tools CLIP+VQ-GAN can create impressive works of art based on just a few words of input.
Combining these methods with CLIP allows you to generate images based on text. This one uses a face generator. https://github.com/orpatashnik/StyleCLIP
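The basic CLIP-guidance loop optimizes a latent code so that the embedding of the generated image moves toward the embedding of the text prompt. A toy sketch with linear stand-ins for the generator and the image encoder (real systems use StyleGAN/VQGAN plus CLIP's cosine distance and an optimizer like Adam; everything below is a simplified assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(16, 8))   # stand-in "generator": latent -> image pixels
B = rng.normal(size=(4, 16))   # stand-in "image encoder": image -> embedding
t = rng.normal(size=4)         # stand-in text embedding for the prompt

def loss(z):
    # Squared distance between the generated image's embedding and the prompt's.
    return float(np.sum((B @ (A @ z) - t) ** 2))

# Step size chosen from the spectral norm so plain gradient descent converges.
lr = 0.5 / np.linalg.norm(B @ A, 2) ** 2

z = rng.normal(size=8)
losses = [loss(z)]
for _ in range(500):
    grad = 2 * (B @ A).T @ (B @ A @ z - t)   # analytic gradient of the loss
    z = z - lr * grad
    losses.append(loss(z))
```

Each step nudges the latent so the generated image matches the prompt better; swapping in a real generator and CLIP turns this same loop into text-driven image editing.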
-
Alias-Free GAN
The first two demo videos are interesting examples of using StyleCLIP's global directions to guide an image toward a "smiling face" as noted in that paper, with smooth interpolation: https://github.com/orpatashnik/StyleCLIP
I ran a few chaotic experiments with StyleCLIP a few months ago which would work very well with smooth interpolation: https://minimaxir.com/2021/04/styleclip/
-
[D] StyleGAN2 + CLIP = StyleCLIP: You Describe & AI Photoshops Faces For You
Official GitHub
Yes, using a custom image requires changing a few lines of code (which the OP also did in their Notebook variant but did not cite that issue, heh).
-
Edit a human face image with text-to-image using Google Colab notebook StyleCLIP by orpatashnik. 3 transformations shown. Details in a comment.
How to invert and edit an image
The Google Colab notebook is StyleCLIP. GitHub. Twitter reference.
What are some alternatives?
encoder4editing - Official implementation of "Designing an Encoder for StyleGAN Image Manipulation" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02766
pytorch-seq2seq - Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
compare_gan - Compare GAN code.
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
NVAE - The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder" (NeurIPS 2020 spotlight paper)
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
Deep-Learning-Papers-Reading-Roadmap - Deep Learning papers reading roadmap for anyone who are eager to learn this amazing tech!
OPUS-MT-train - Training open neural machine translation models
pixel2style2pixel - Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
alias-free-gan - Alias-Free GAN project website and code
Seq2seq-PyTorch