alias-free-gan
StyleCLIP
| | alias-free-gan | StyleCLIP |
|---|---|---|
| Mentions | 3 | 23 |
| Stars | 1,320 | 3,863 |
| Growth | 0.0% | - |
| Activity | 1.8 | 0.0 |
| Latest commit | over 2 years ago | 10 months ago |
| Language | HTML | - |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alias-free-gan
- When is the alias-free GAN code going to be released?
-
Anime Alias-Free GAN Interpolation
Curious how you managed to make this, since the code hasn't been released yet (https://github.com/NVlabs/alias-free-gan). Did you write it from the research paper? If so, do you have a GitHub link?
-
Alias-Free GAN
This isn't true. I do ML every day. You are mistaken.
I click the website. I search "model". I see two results. Oh no, that means there's no download link for the model.
I go to the GitHub. Maybe the model download link is there. I see zero code: https://github.com/NVlabs/alias-free-gan
Zero code. Zero model.
You, and everyone like you, who are gushing with praise and hypnotized by pretty images and a nice-looking pdf, are doing damage by saying that this is correct and normal.
The thing that's useful to me, first and foremost, is a model. Code alone isn't useful.
Code, however, is the recipe to create the model. It might take 400 hours on a V100, and it might not actually result in the model being created, but it slightly helps me.
There is no code here.
Do you think the pdf is helpful? Yeah, maybe. But I'm starting to suspect that the pdf is in fact a tech demo for NVIDIA, not a scientific contribution whose purpose is to be helpful to people like me.
Okay? Model first. Code second. Paper third.
Every time a tech demo like this comes out, I'd like you to check that those things exist, in that order. If they don't, it's not reproducible science. It's a tech demo.
I need to write something about this somewhere, because a large number of people seem to be caught in this spell. You're definitely not alone, and I'm sorry for sounding like I was singling you out. I just loaded up the comment section, saw your comment, thought "Oh, awesome!" clicked through, and went "Oh no..."
StyleCLIP
-
A History of CLIP Model Training Data Advances
While CLIP on its own is useful for applications such as zero-shot classification, semantic searches, and unsupervised data exploration, CLIP is also used as a building block in a vast array of multimodal applications, from Stable Diffusion and DALL-E to StyleCLIP and OWL-ViT. For most of these downstream applications, the initial CLIP model is regarded as a “pre-trained” starting point, and the entire model is fine-tuned for its new use case.
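As a rough illustration of the zero-shot classification use mentioned above, here is a minimal sketch using the Hugging Face transformers CLIP API. The checkpoint is the public openai/clip-vit-base-patch32 model; the input file and candidate labels are assumptions for the example.

```python
# Minimal sketch of zero-shot classification with CLIP via the
# Hugging Face transformers API. "photo.jpg" is a stand-in input.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog", "a photo of a church"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns
# them into a distribution over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```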
-
[D] What is the largest / most diverse GAN model currently out there?
I'm currently building a fork of StyleCLIP's global directions which allows you to control multiple semantic parameters simultaneously to generate and edit an image with StyleGAN and CLIP in real time. I want to showcase its potential as a design tool. Unfortunately, GAN weights are trained on very domain-specific data (faces, cars, churches). This makes them inferior to modern diffusion models, which I can use to generate whatever comes to mind. Although I know we won't have a GAN-based DALL-E counterpart anytime soon, I would still love to use my system with weights that can output a wide variety of things.
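A minimal sketch of what combining several such edits at once might look like. All names here (`w`, `directions`, `generator`) are hypothetical placeholders, not the actual StyleCLIP API:

```python
# Illustrative sketch, not the StyleCLIP codebase itself: apply several
# precomputed semantic direction vectors to a StyleGAN latent at once.
import numpy as np

def apply_global_directions(w, directions, strengths):
    """Shift a StyleGAN latent along several edit directions at once.

    w:          latent code as a NumPy array, e.g. shape (num_layers, 512)
    directions: dict of edit name -> direction vector (same shape as w)
    strengths:  dict of edit name -> scalar edit strength
    """
    w_edit = w.copy()
    for name, direction in directions.items():
        w_edit = w_edit + strengths.get(name, 0.0) * direction
    return w_edit

# e.g. a "smile" edit and a "young" edit applied simultaneously:
# w_new = apply_global_directions(w, directions, {"smile": 2.0, "young": -1.5})
# image = generator.synthesize(w_new)  # hypothetical generator call
```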
-
test
(Added Feb. 15, 2021) StyleCLIP - Colaboratory by orpatashnik. Uses StyleGAN to generate images. GitHub. Twitter reference. Reddit post.
-
I used AI to generate real-life For Honor character faces
Link for StyleCLIP
-
AI-Generated Art Scene Explodes as Hackers Create Groundbreaking New Tools - New AI tools CLIP+VQGAN can create impressive works of art based on just a few words of input.
Combining these methods with CLIP allows you to generate images based on text. This one uses a face generator. https://github.com/orpatashnik/StyleCLIP
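For context, the basic recipe these CLIP+generator tools share is an optimization loop that nudges a latent toward higher CLIP similarity with the prompt. A schematic sketch, where `generator` and `clip_score` are placeholder callables rather than a real library API:

```python
# Schematic sketch of CLIP-guided generation: optimize a latent so the
# generated image matches a text prompt. `generator` and `clip_score`
# are placeholders, not a real library API.
import torch

def clip_guided_generation(generator, clip_score, prompt, steps=300, lr=0.05):
    # Start from a random latent and treat it as the trainable parameter.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        image = generator(z)               # decode latent -> image tensor
        loss = -clip_score(image, prompt)  # maximize CLIP similarity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```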
-
Alias-Free GAN
The first two demo videos are interesting examples of using StyleCLIP's global directions to guide an image toward a "smiling face", as noted in that paper, with smooth interpolation: https://github.com/orpatashnik/StyleCLIP
I ran a few chaotic experiments with StyleCLIP a few months ago which would work very well with smooth interpolation: https://minimaxir.com/2021/04/styleclip/
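Smooth interpolation along an edit like this usually amounts to ramping the edit strength frame by frame. A hypothetical sketch, where `w`, `smile_direction`, and `generator` are placeholders:

```python
# Hypothetical sketch of smooth interpolation along one edit direction:
# ramp the edit strength frame by frame to get a video of a face
# gradually smiling. `w`, `smile_direction`, and `generator` are
# placeholders, not the actual StyleCLIP API.
import numpy as np

num_frames = 60
frames = []
for t in np.linspace(0.0, 1.0, num_frames):
    w_t = w + t * 3.0 * smile_direction   # strength ramps from 0 to 3
    frames.append(generator.synthesize(w_t))
```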
-
[D] StyleGAN2 + CLIP = StyleCLIP: You Describe & AI Photoshops Faces For You
Official GitHub
Yes, using a custom image requires changing a few lines of code (which the OP also did in their Notebook variant but did not cite that issue, heh).
-
Edit a human face image with text-to-image using Google Colab notebook StyleCLIP by orpatashnik. 3 transformations shown. Details in a comment.
How to invert and edit an image
The Google Colab notebook is StyleCLIP. GitHub. Twitter reference.
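For reference, one common way to invert a real photo before editing it is direct latent optimization (encoders such as e4e, listed under the alternatives below, are the faster trained alternative). A rough sketch with placeholder names `generator` and `lpips_loss`:

```python
# Rough sketch of optimization-based GAN inversion: fit a latent so the
# generator reproduces the target photo, then edit that latent.
# `generator` and `lpips_loss` are placeholders; real pipelines such as
# e4e train an encoder instead of optimizing per image.
import torch

def invert_image(generator, target, lpips_loss, steps=500, lr=0.01):
    w = generator.mean_latent().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        recon = generator.synthesize(w)
        # Perceptual loss plus a pixel-space term keeps the fit stable.
        loss = lpips_loss(recon, target) + ((recon - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```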
What are some alternatives?
encoder4editing - Official implementation of "Designing an Encoder for StyleGAN Image Manipulation" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02766
compare_gan - Compare GAN code.
NVAE - The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder" (NeurIPS 2020 spotlight paper)
stylegan2-pytorch - Simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch. Enabling everyone to experience disentanglement
pixel2style2pixel - Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
tensor2tensor - Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Story2Hallucination
CLIP-Style-Transfer - Doing style transfer with linguistic features using OpenAI's CLIP.
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
stylegan-xl - [SIGGRAPH'22] StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets