StyleCLIP vs NVAE
| | StyleCLIP | NVAE |
|---|---|---|
| Mentions | 23 | 3 |
| Stars | 3,889 | 958 |
| Growth | - | 2.9% |
| Activity | 0.0 | 0.0 |
| Last commit | 11 months ago | over 1 year ago |
| Language | HTML | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
StyleCLIP
- A History of CLIP Model Training Data Advances
While CLIP on its own is useful for applications such as zero-shot classification, semantic searches, and unsupervised data exploration, CLIP is also used as a building block in a vast array of multimodal applications, from Stable Diffusion and DALL-E to StyleCLIP and OWL-ViT. For most of these downstream applications, the initial CLIP model is regarded as a “pre-trained” starting point, and the entire model is fine-tuned for its new use case.
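As a rough illustration of that "pre-trained starting point" usage, the sketch below runs zero-shot classification with an off-the-shelf CLIP checkpoint through the Hugging Face `transformers` wrappers; the checkpoint name, image path, and candidate labels are placeholders, not anything specified in the article above.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Zero-shot classification with a pre-trained CLIP checkpoint.
# Checkpoint name, image path, and labels are illustrative placeholders.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image
labels = ["a photo of a face", "a photo of a car", "a photo of a church"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into per-label probabilities without any task-specific fine-tuning.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```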
- [D] What is the largest / most diverse GAN model currently out there?
I'm currently building a fork of StyleCLIP's global directions that allows you to control multiple semantic parameters simultaneously to generate and edit an image with StyleGAN and CLIP in real time. I want to showcase its potential as a design tool. Unfortunately, GAN weights are trained on very domain-specific data (faces, cars, churches). This makes them inferior to modern diffusion models, which I can use to generate whatever comes to mind. Although I know we won't have a GAN-based DALL-E counterpart anytime soon, I would still love to use my system with weights that can output a wide variety of things.
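The latent manipulation behind a tool like that can be sketched in a few lines: each semantic control corresponds to a direction in StyleGAN's latent (or style) space, and several edits compose by adding scaled directions to the inverted code. The names below (`apply_edits`, `w_plus`, `directions`) are hypothetical placeholders, not the fork's actual API.

```python
import torch

# Hypothetical sketch of composing several "global direction" edits on an
# inverted StyleGAN latent; names and shapes are placeholders.
def apply_edits(w_plus, directions, strengths):
    """w_plus: latent code, e.g. shape (1, num_layers, 512).
    directions: {name: direction tensor with the same shape as w_plus}.
    strengths: {name: scalar controlling how strongly the edit is applied}."""
    w_edited = w_plus.clone()
    for name, alpha in strengths.items():
        # Each semantic parameter is a scaled offset along its direction,
        # so multiple edits simply add up in latent space.
        w_edited = w_edited + alpha * directions[name]
    return w_edited

# e.g. apply_edits(w_plus, directions, {"smile": 2.0, "age": -1.5}); the result
# is fed back through the pre-trained StyleGAN synthesis network to render the image.
```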
- test
(Added Feb. 15, 2021) StyleCLIP - Colaboratory by orpatashnik. Uses StyleGAN to generate images. GitHub. Twitter reference. Reddit post.
- I am David Bau, and I study the structure of the complex computations learned within deep neural networks.
- Dragon Age Origins Companions as Photorealistic People.
I used StyleCLIP. I purchased some Google Colab time to use their GPUs. I'll probably do some more later this week.
- Turning BDO characters into blursed people with AI
- I used AI to generate real-life For Honor character faces
Link for Styleclip
- AI-generated 'real' faces of CGI characters - description in comments
So, I watched this Corridor Crew video on generating realistic faces from CG characters, and I wanted to try it out on the RDR2 models. The github link for the original work is here. If you guys are interested I can generate the faces of more characters from RDR2 and RDR1. I can even try some from RD Revolver.
- AI Generated Art Scene Explodes as Hackers Create Groundbreaking New Tools - New AI tools CLIP+VQ-GAN can create impressive works of art based on just a few words of input.
Combining these methods with CLIP allows you to generate images based on text. This one uses a face generator. https://github.com/orpatashnik/StyleCLIP
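The CLIP+VQ-GAN-style tools described in that article share one basic loop: render an image from a latent, score it against the text prompt with CLIP, and push the gradient of that score back into the latent. The sketch below shows only that loop, with a trivial stand-in where a real VQGAN decoder or StyleGAN synthesis network would go; the prompt and hyperparameters are arbitrary, and CLIP's pixel normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizer

# Frozen CLIP model used only to score image-text similarity.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
for p in clip.parameters():
    p.requires_grad_(False)
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

tokens = tokenizer(["a painting of a lighthouse at sunset"], return_tensors="pt", padding=True)
text_emb = F.normalize(clip.get_text_features(**tokens), dim=-1)

generator = torch.nn.Sequential(          # stand-in for a real image generator
    torch.nn.Linear(256, 3 * 64 * 64),
    torch.nn.Sigmoid(),
    torch.nn.Unflatten(1, (3, 64, 64)),
)
latent = torch.randn(1, 256, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    image = generator(latent)                                # (1, 3, 64, 64) in [0, 1]
    image = F.interpolate(image, size=224, mode="bilinear")  # CLIP's input resolution
    image_emb = F.normalize(clip.get_image_features(pixel_values=image), dim=-1)
    loss = 1.0 - (image_emb * text_emb).sum()                # maximize cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```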
- [D] How to save latent code edited from StyleClip.
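For that question, the usual answer with PyTorch-based StyleGAN/StyleCLIP code is simply to detach the edited latent tensor and serialize it with `torch.save`; the shape and file name below are placeholders.

```python
import torch

# Hypothetical edited W+ latent from a StyleCLIP run; shape and name are placeholders.
w_edited = torch.randn(1, 18, 512)

# Save the latent code to disk ...
torch.save(w_edited.detach().cpu(), "edited_latent.pt")

# ... and load it back later to re-render or keep editing.
w_loaded = torch.load("edited_latent.pt")
```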
NVAE
- [R] Looking for papers on modified variational autoencoders (VAEs)
NVAE: A Deep Hierarchical Variational Autoencoder
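For readers unfamiliar with the term, a "deep hierarchical VAE" such as NVAE stacks several groups of latent variables, and its training objective combines a reconstruction term with one KL term per group. The toy function below shows only that objective structure; the group count and shapes in the usage example are made up and bear no relation to NVAE's actual architecture.

```python
import torch

# Toy illustration of a hierarchical VAE objective: a reconstruction term plus
# one KL term per latent group. In NVAE-style models each group's prior is
# additionally conditioned on the groups above it; that detail is omitted here.
def hierarchical_elbo(recon_log_prob, posteriors, priors):
    kl = sum(torch.distributions.kl_divergence(q, p).sum(dim=-1)
             for q, p in zip(posteriors, priors))
    return recon_log_prob - kl  # ELBO: reward reconstruction, penalize KL

# Usage with two latent groups of 8 dimensions and a batch of 4 (placeholder numbers):
q = [torch.distributions.Normal(torch.zeros(4, 8), torch.ones(4, 8)) for _ in range(2)]
p = [torch.distributions.Normal(torch.zeros(4, 8), torch.ones(4, 8)) for _ in range(2)]
elbo = hierarchical_elbo(torch.zeros(4), q, p)
```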
- Alias-Free GAN
…non-commercially. It's a great example of the difference between open source (which it is) and free software (which it is not). So we're back to square one, where it is probably best to clean-room the implementation from the paper, which on its own is nearly useless for reproducing the model.
What are some alternatives?
encoder4editing - Official implementation of "Designing an Encoder for StyleGAN Image Manipulation" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02766
alias-free-gan - Alias-Free GAN project website and code
compare_gan - Compare GAN code.
tensor2tensor - Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
pixel2style2pixel - Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
Story2Hallucination
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
CLIP-Style-Transfer - Doing style transfer with linguistic features using OpenAI's CLIP.
stylegan-xl - [SIGGRAPH'22] StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets