-
maua-stylegan2
This is the repo for my experiments with StyleGAN2. There are many like it, but this one is mine. Contains code for the paper Audio-reactive Latent Interpolations with StyleGAN.
-
stylegan2-surgery
StyleGAN2 fork with scripts and convenience modifications for creative media synthesis
You don't really need to do model surgery. All the convolutions accept arbitrary spatial dimensions, so you can use network bending padding operations to get any output size you like. Vadim Epstein's repo does something slightly different which lets you use a different latent per section: https://github.com/eps696/stylegan2ada. Mine has the simpler, single-latent version: https://github.com/JCBrouwer/maua-stylegan2.

For training, all you have to do is change the size of your constant layer, or just graft on some more upsample blocks. Either way, there's not much point in training at unusual rectangular resolutions: you'll get pretty much identical results by forcefully resizing your dataset to a square and then stretching the generated images back out to the original aspect ratio. Unless you have a ridiculous amount of VRAM, larger models don't make much sense either, especially because it's hard to find 10k images at such a big resolution.
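The point about the constant layer can be shown with a toy model. This is a minimal sketch, not the actual StyleGAN2 code: the class and parameter names are illustrative. It demonstrates that a stack of 3×3 convolutions and fixed ×2 upsamples accepts any spatial input size, so resizing the learned constant input changes the output aspect ratio without touching any convolution weights.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Illustrative stand-in for a StyleGAN2-style synthesis network."""

    def __init__(self, channels=32, const_size=(4, 4)):
        super().__init__()
        # The "constant layer": a learned tensor whose spatial size,
        # multiplied by the fixed upsample factor, sets the output shape.
        self.const = nn.Parameter(torch.randn(1, channels, *const_size))
        # Two upsample/conv blocks; padding=1 with 3x3 kernels preserves
        # spatial size, so the stack works for any input resolution.
        self.blocks = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 3, 1),  # to RGB
        )

    def forward(self, batch=1):
        return self.blocks(self.const.repeat(batch, 1, 1, 1))

# Same conv stack, different constant sizes -> different aspect ratios.
square = TinyGenerator(const_size=(4, 4))()  # spatial size 16x16
wide = TinyGenerator(const_size=(4, 7))()    # spatial size 16x28
print(square.shape, wide.shape)
```

In a real pretrained checkpoint you would instead re-initialize (or pad/crop) the stored constant tensor to the new shape and fine-tune, since the rest of the weights are resolution-agnostic.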