stylegan2
pix2pix
| | stylegan2 | pix2pix |
|---|---|---|
| Mentions | 40 | 13 |
| Stars | 10,753 | 9,859 |
| Growth | 0.2% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | about 1 year ago | almost 3 years ago |
| Language | Python | Lua |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan2
- Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
I don't know. If you're really curious, you can just try it: https://github.com/NVlabs/stylegan2
- Used thispersondoesnotexist.com, then expanded it with DALL-E
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
- Show HN: Food Does Not Exist
> The denoising part of a denoising autoencoder refers to the noise applied to its input
Agree, it converts a noisy image to a denoised image. But the odd thing is, when you put a noisy image into a StyleGAN2 encoder, you get latents which the decoder will turn into a denoised image. So in practical use, you can take a trained StyleGAN2 encoder/decoder pair and use it as if it were a denoiser.
> These differences lead to learned distributions in the latent space that are entirely different
I also agree there. The training for a denoising auto-encoder and for a GAN is different, leading to different distributions being sampled when generating images. But the architectures are still very similar, meaning the limits of what can be learned should be the same.
> Beyond that the comparison just doesn't work, yes there are two networks but the discriminator doesn't play the role of the AE's encoder at all
Yes, the discriminator in a GAN won't work like an encoder. But if you look at how StyleGAN 1/2 are used in practice, people combine it with a so-called "projection", which is effectively an encoder to convert images to latents. So people use a pipeline of "image to latent encoder" + "latent to image decoder".
That whole pipeline is very similar to an auto-encoder. For example, here's an NVIDIA paper about how they round-trip from image to latent to image with StyleGAN: https://arxiv.org/abs/1912.04958
My interpretation of what they did in that paper is that they effectively trained a StyleGAN-like model with the image L2 loss typically used for training a denoising auto-encoder.
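That round-trip can be illustrated with a toy model. Below is a minimal sketch in which a fixed linear map stands in for the StyleGAN2 synthesis network and least-squares projection stands in for the latent-optimization "projection" step; all names and shapes here are made up for illustration, not from the actual StyleGAN2 code:

```python
import numpy as np

# Toy LINEAR "generator" standing in for a StyleGAN2 synthesis network,
# to show why projecting a noisy image into the latent space and decoding
# it again acts like a denoiser.
rng = np.random.default_rng(0)
G = rng.normal(size=(64, 8))     # decoder: 8-dim latent -> 64-dim "image"

def project(image):
    # Encoder stand-in: minimize ||G @ w - image||_2 over w (least squares),
    # analogous to the reconstruction objective used in StyleGAN projection.
    w, *_ = np.linalg.lstsq(G, image, rcond=None)
    return w

clean = G @ rng.normal(size=8)            # an image on the generator's "manifold"
noisy = clean + 0.1 * rng.normal(size=64)  # add off-manifold noise
roundtrip = G @ project(noisy)             # image -> latent -> image

# Noise orthogonal to the generator's range is discarded by the round-trip,
# so the reconstruction lands closer to the clean image than the noisy input.
print(np.linalg.norm(roundtrip - clean) < np.linalg.norm(noisy - clean))
```

The denoising here comes purely from the latent space being much smaller than the image space, which is also why the real StyleGAN2 pipeline behaves this way in practice.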
- "Why yes I totally believe the 'Xinjiang Police Files', they got photos of REAL (100% not AI generated) detainees!"
- How did they code Viola AI (face to cartoon)
These problems are usually solved with CNN encoder-decoder frameworks, typically GANs (Generative Adversarial Networks; see StyleGAN2).
- AI morphs many faces together to all sing Scatman
This is the result of two different models. The first looks like a latent-space interpolation of StyleGAN2, and the mouth movements are without a doubt from Wav2Lip.
- What A.I. tool is this?
OP: if you want to run this at higher resolution, you should probably look at running it yourself, using something like this: https://github.com/NVlabs/stylegan2
- Imagined ML model deployment on normal machine, is it possible?
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
- I'm implementing StyleGAN2 with Keras. I was worried it wasn't working, but after some 300K training steps it's finally starting to converge. (+ plot of what the first (4x4) part looks like)
A few of you might've seen an earlier post of mine about this project (Or the repost that got more upvotes 🙃), and I've improved the code and network since then after more thoroughly reading and understanding the official StyleGAN2 implementation.
- Is it just me or has Google Colab Pro become a lot more restrictive lately?
So I've been a Pro+ subscriber since around November which I mainly use to train GANs. I have multiple Google accounts, let's call them Account 1, 2, and 3. Accounts 1 and 2 are normal Google accounts and Account 3 is an account I got from my university after I graduated which has unlimited storage.
pix2pix
- Any work on Style transfer using Stable Diffusion based on image-mask pairs similar to Pix2Pix
I have previously retrained a Pix2Pix GAN for image-to-image style transfer using image-mask pairs. I expect Stable Diffusion to be better than Pix2Pix, but the problem sounds like something that should have been tackled already. I am familiar with text-based instructions for style transfer using SD (InstructPix2Pix), but retraining with image-mask pairs should give better results. Does anyone know if anything like that exists already? Reference for Pix2Pix: https://phillipi.github.io/pix2pix/
- Hello, noob programmer here, but I do 3D art and I have a few questions
Additionally, there are some open source AI models for texture generation, such as pix2pix: https://phillipi.github.io/pix2pix/.
- Predict Fourier spectrum
Sounds fun. Also sounds like image-to-image translation in 1D? Pix2pix is a famous implementation that uses UNet + adversarial loss: https://github.com/phillipi/pix2pix
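For reference, the pix2pix generator is trained with exactly that combination: a conditional adversarial term plus an L1 reconstruction term (weighted by λ = 100 in the paper). A minimal sketch of the combined objective, with made-up toy arrays standing in for real network outputs:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Combined pix2pix generator objective: adversarial term + lam * L1.

    d_fake: discriminator probabilities on generated outputs (toy values here).
    fake, target: generated signal and its paired ground truth.
    """
    eps = 1e-12
    l_gan = -np.mean(np.log(d_fake + eps))  # generator wants D(fake) -> 1
    l_l1 = np.mean(np.abs(fake - target))   # paired L1 reconstruction term
    return l_gan + lam * l_l1

# Toy 1-D "signals" standing in for images (or spectra, in the 1D case)
fake = np.array([0.1, 0.2, 0.3])
target = np.array([0.1, 0.2, 0.5])
d_fake = np.array([0.5])
loss = pix2pix_generator_loss(d_fake, fake, target)
```

The L1 term is what keeps outputs tied to the paired target; the adversarial term is what pushes them off the blurry "average" solution that a pure L1 loss would give.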
- Things I Have Drawn is a site in which the things kids draw are real
- Is it possible to remove a sticker from a photo?
I guess everything is rasterized correctly. While u/Kelaifu is correct that there is no software that can guess your face where it's missing, there are machine learning techniques and implementations (see: https://phillipi.github.io/pix2pix/) that can go pretty far with guesses.
- Explore the water-logged city of Ys!
[Image generated using Machine Learning algorithm pix2pix trained by me using this dataset: CMP Facade Dataset]
- [D] Geo DeepFakes are not far away
pix2pix can already transform images between satellite and maps (https://phillipi.github.io/pix2pix/). Will there be any difference if I use pix2pix to transform satellite images to maps and back?
- Help with GAN Pix2Pix Code!
```python
# Resize all images in a folder by 1/2
# Modified from https://github.com/phillipi/pix2pix/blob/master/scripts/combine_A_and_B.py
# resize_A_to_C.py
# Usage: python resize_A_to_C.py --fold_A ./test_A --fold_C ./test_C
import argparse
import os

import cv2

parser = argparse.ArgumentParser("resize images by 1/2")
parser.add_argument(
    "--fold_A",
    dest="fold_A",
    help="input directory for original images",
    type=str,
    default="./test_A",
)
parser.add_argument(
    "--fold_C", dest="fold_C", help="output directory", type=str, default="./test_C"
)
args = parser.parse_args()
for arg in vars(args):
    print("[%s] = " % arg, getattr(args, arg))

img_list = os.listdir(args.fold_A)
if not os.path.isdir(args.fold_C):
    os.makedirs(args.fold_C)

for name_A in img_list:
    path_A = os.path.join(args.fold_A, name_A)
    # skip hidden files and subdirectories
    if os.path.isfile(path_A) and not name_A.startswith("."):
        im_A = cv2.imread(path_A, cv2.IMREAD_COLOR)
        # scale down both dimensions by a factor of 1/2
        scale_down = 0.5
        im_C = cv2.resize(
            im_A, None, fx=scale_down, fy=scale_down, interpolation=cv2.INTER_LINEAR
        )
        # store resized image with the same name as in folder A
        path_C = os.path.join(args.fold_C, name_A)
        cv2.imwrite(path_C, im_C)
```
- cGAN (img2img) Translation Applied to 3D Scene Post-Editing
This project uses the pix2pix Image translation architecture (https://phillipi.github.io/pix2pix/) for 3D image post-processing. The goal was to test the 3D applications of this type of architecture.
- Trained the model based on dark art sketches. Got such bizarre forms of life
It seems like this would even make for a cool website, like pix2pix.
What are some alternatives?
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
stylegan - StyleGAN - Official TensorFlow Implementation
CycleGAN - Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation
naver-webtoon-faces - Generative models on NAVER Webtoon faces
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
awesome-image-translation - A collection of awesome resources on image-to-image translation.
lightweight-gan - Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image generations that can be trained within a day or two
perspective-change - cGAN Based 3D Scene Re-Compositing
lucid-sonic-dreams
art-DCGAN - Modified implementation of DCGAN focused on generative art. Includes pre-trained models for landscapes, nude-portraits, and others.