stylegan2-surgery
StyleGAN2 fork with scripts and convenience modifications for creative media synthesis
I trained a Generative Adversarial Network (a stylegan2-ada model) with NVIDIA's StyleGAN3 codebase on an RTX 3090 GPU over a few nights. This is an interpolation video of the generator model's random-walk datapoints, divided into 4 different windows.

Initially I web-crawled some 2000 images of Formidable and cropped them with nagadomi's lbpcascade_animeface anime face detector, with settings chosen so that her assets would also be included in the image. Previously I had done this by transfer-learning from Gwern's ThisWaifuDoesNotExist, which only included heads of Emilia from Re:Zero, and the results were quite good. This time I wanted to see if the model could also handle something more than just a head. Having Formidable's chest in the image made some angles perform pretty badly, as there are as many ways of drawing anatomy as there are artists. Because of this, I removed all swimsuit and party skin images, as getting her features right was hard enough with her default skin; this brought the final dataset size down to some 1500 images.

In the end, I'm pretty satisfied with the results, but I could prune the dataset even further, crop the images more homogeneously, and try somewhat different hyperparameters (most importantly gamma) as well as stylegan3-t. However, I want to move on to trying out the Stable Diffusion model, so I will wrap this project up, at least for now, and post this.

There is a psi hyperparameter used in this video generation (the truncation parameter) that determines how "creative" the generator may be, i.e. how far from the average of the latent distribution it can stray at any given datapoint (video time, in this case). With psi=0 you get an almost static video; with psi=1 you get wildly varying results, of which half aren't even recognizable while some are really good. I settled on 0.65, which I think gives nice variety with a reasonable number of bad morphs.
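For reference, the truncation trick behind psi is just a linear interpolation between each latent and the average latent, which is why psi=0 collapses everything to one "average" face. A minimal NumPy sketch (variable names are mine, not StyleGAN's):

```python
import numpy as np

def truncate(w, w_avg, psi=0.65):
    """Truncation trick: pull a latent vector toward the average latent.

    psi=0 returns w_avg exactly (an almost static video, since every
    frame maps to the average face); psi=1 returns w unchanged (full
    variety, including unrecognizable morphs).
    """
    return w_avg + psi * (w - w_avg)

# Toy illustration with a made-up 4-dim latent space:
w_avg = np.zeros(4)
w = np.array([2.0, -2.0, 1.0, 0.5])
print(truncate(w, w_avg, psi=0.0))  # collapses to w_avg: [0. 0. 0. 0.]
print(truncate(w, w_avg, psi=1.0))  # untouched: [ 2.  -2.   1.   0.5]
```

At psi=0.65 each latent keeps 65% of its distance from the average, which is why that setting trades off variety against bad morphs.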
Thanks! From scratch it takes anywhere from days to a few weeks with a powerful GPU, but if one transfer-learns from an existing model (like ThisWaifuDoesNotExist v3) it can be done in a few hours to days, plus some additional hours for finishing touches (lowering the learning rate, cleaning the dataset with the discriminator). Most of the active work goes into collecting and preparing the data, with some time spent manually evaluating the best model checkpoints. To my understanding there are very few attempts to generate more than an anime head with these models, and I'm not sure how much StyleGAN would need to be modified for that. There is this project which I have heard might work with full-body generation, but I have no experience with it.
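The data-preparation step, cropping the crawled images around detected faces with extra margin so that more than the head lands in frame, can be sketched roughly like this with OpenCV and nagadomi's cascade file; the margin factor and detector settings here are my assumptions, not the exact values used:

```python
def expand_box(x, y, w, h, img_w, img_h, margin=0.3):
    """Grow a detected face box so the crop covers more than the head.

    Pads every side by margin*w and extends further downward (3x the
    pad) to pull the upper body into the frame; clamps to image bounds.
    """
    pad = int(margin * w)
    x0, y0 = max(0, x - pad), max(0, y - pad)
    x1 = min(img_w, x + w + pad)
    y1 = min(img_h, y + h + 3 * pad)
    return x0, y0, x1, y1

def crop_faces(img, cascade_path="lbpcascade_animeface.xml", margin=0.3):
    """Detect anime faces with nagadomi's LBP cascade, return expanded crops."""
    import cv2  # imported here so expand_box stays usable without OpenCV
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(64, 64))
    h_img, w_img = img.shape[:2]
    return [img[y0:y1, x0:x1]
            for (x, y, w, h) in faces
            for (x0, y0, x1, y1) in [expand_box(x, y, w, h,
                                                w_img, h_img, margin)]]
```

After a pass like this, the bad crops still have to be pruned by hand, which is where most of the active work goes.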