DeOldify
stylegan
| | DeOldify | stylegan |
|---|---|---|
| Mentions | 58 | 31 |
| Stars | 17,578 | 13,933 |
| Growth | - | 0.5% |
| Activity | 2.7 | 0.0 |
| Latest commit | 7 months ago | 13 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeOldify
- Would someone be able to restore this image of my grandpa in the marines? Color and non if possible!
- Help achieving a look from B&W to color film
- Is there a way to colorize images like this using controlNet and without morphing his face? If yes, does anyone know how?
- controlnet is great to bring back life to old picture. This was one of the most dangerous job in my country. Moving logs on a river
- ControlNet for Automatic1111 is here!
- I improved the colorization of the NYC 1911 film by combining multiple colorization neural networks. Here is a brief comparison to the previous method. I will leave a link to the full video in the comments for those who are interested.
The old method is simple: all you need to do is run DeOldify in a Colab notebook. Then you can pick whichever neural networks you want for upscaling and frame interpolation. A lot of people use DAIN-APP, but I find RIFE faster and easier to use.
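The frame-interpolation step the post mentions inserts new frames between existing ones to raise the frame rate. Here is a minimal sketch of that idea using a naive linear blend; the function name is ours, and tools like RIFE predict intermediate motion with a neural network rather than averaging pixels, so this is only an illustration of the pipeline step, not of RIFE itself.

```python
import numpy as np

def interpolate_midframes(frames):
    """Double the effective frame rate by inserting one blended frame
    between each consecutive pair of frames.

    Naive linear blend -- a stand-in for learned interpolators (RIFE,
    DAIN), which estimate motion instead of averaging pixel values.
    """
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        mid = (a.astype(np.float32) + b.astype(np.float32)) / 2.0
        out.append(a)
        out.append(mid.astype(a.dtype))
    out.append(frames[-1])
    return out

# Two 2x2 grayscale frames in, three frames out; the inserted frame
# is the pixel-wise average of its neighbors.
frames = [np.zeros((2, 2), dtype=np.uint8),
          np.full((2, 2), 100, dtype=np.uint8)]
doubled = interpolate_midframes(frames)
```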
- 80 Blog Posts to Learn Computer Vision
Today, we're talking to a very special "Software Guy, currently digging deep into GANs": the author of DeOldify, Jason Antic.
- I recently found an AI for restoring and colorizing old photos, and that's how I managed to restore the only photo of my great-grandfather, about whom I know practically nothing (other than that he was shot during the war and was quite a ladies' man)! I'll attach links in a comment. :) The spotted tie rocks. :D
- Anyone know of a (preferably local) batch-image colourisation software?
- Retro personal computer ads from the 1980s
I think the novel and interesting tech is still happening; it's just that without the colorful ads for it on TV, and without the software being packaged up and sold with pretty box art that you can physically hold, it doesn't feel as much like a capital-E Experience. It's probably the Internet's fault that we don't do things like that anymore, but the upside is that we now have access to so many ideas and applications from all over, even ones that aren't commercially viable.
Some that look exciting to me are: an AI that lets you animate still photos realistically [1], a simple website that guides you to discover new parks, eateries, and other places near you [2], an AI that colorizes old black-and-white photos/video [3], a Street View style map of the game world from "The Legend of Zelda: Breath of the Wild", with some 1st person 360 degree photos [4], and a tiny game engine that lets you distribute your whole game physically via printed QR codes [5].
If marketing and graphic design people ever felt like getting together to do some 'side projects', I vote that they should make print ads for apps/websites that they like :)
[1] https://github.com/AliaksandrSiarohin/first-order-model
[2] https://randomlocation.xyz (https://randomlocation.xyz/help.txt for customization)
[3] https://github.com/jantic/DeOldify
[4] https://nassimsoftware.github.io/zeldabotwstreetview/
[5] https://github.com/kesiev/rewtro
stylegan
- An AI artist isn't an artist
I've been following generative AI since 2017, when Nvidia released their first GAN paper, and the results have always fascinated me. I trained my own models with their repo, then experimented with other open-source projects. I went through the pain of assembling my own dataset, tweaking code parameters to achieve what I was looking for, and dealing with all kinds of hardware/software issues. I know it's not easy. (Here's a screenshot of a motorbike GAN model I was training in 2018, taken after 5 hours of training on a GTX 1080: https://imgur.com/a/SIULFhR. Or this, cinema camera output from another locally trained model.) So yeah, I have a couple of ideas about how generative AI works. Yup, things were that bad a few years ago; the technology has come a long way. Setting up and using something like Stable Diffusion with the AUTOMATIC1111 webui isn't really a complex process, though generating AI art locally is always going to feel more rewarding than using a cloud-based service.
- Clearview AI scraped 30 billion images from Facebook and gave them to cops: it puts everyone into a 'perpetual police line-up'
Their algorithm is public; you could do it yourself if you have the proper hardware: https://github.com/NVlabs/stylegan
- StyleGAN-T Nvidia, 30x Faster than SD?
Umm, StyleGAN was the first decent image generation model, and it was producing great images from random seeds five years ago. That's with the obvious caveat that each model was trained to produce one specific type of image, and it helped immensely if the training images were all aligned the same way. Diffusion models are certainly the trendy current architecture for image generation, but AFAIK there's no fundamental theoretical limit on the output quality of any architecture, beyond the general rule that more parameters are better.
- The Concept Art Association updates their AI-restricting gofundme campaign, revealing their lack of AI understanding & nefarious plans! [detailed breakdown]
- This was taken outdoors with no special lighting
- What the F**k
Jokes aside, ML moves extremely fast and our field is advancing quickly. The honest truth is that no researcher can keep up with anything beyond their own extremely niche corner. I'll show you an example: here's what state-of-the-art image generation looked like in 2014, in 2018, and here is today's (which is now highly controllable using text prompts instead of data prompts).
- Garfield
- Teaching AI to Generate New Pokemon
The fundamental technology we will use in this work is a generative adversarial network. Specifically, the StyleGAN variant.
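The adversarial setup behind any GAN, StyleGAN included, is a minimax game between a generator and a discriminator. Below is a deliberately tiny numerical sketch of that objective on 1-D data; every name and number here is made up for illustration (a real StyleGAN generator is a deep style-based convolutional network, not a one-parameter shift).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "images": real samples drawn from N(3, 1).
real = rng.normal(3.0, 1.0, size=512)

def generator(z, theta):
    # Hypothetical one-parameter generator: shift the input noise by theta.
    return z + theta

def discriminator(x, w, b):
    # Hypothetical logistic discriminator D(x) = sigmoid(w*x + b).
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def gan_objective(theta, w, b, n=512):
    # GAN minimax value: E[log D(real)] + E[log(1 - D(fake))].
    # The discriminator maximizes this; the generator minimizes it.
    z = rng.normal(0.0, 1.0, size=n)
    fake = generator(z, theta)
    return (np.log(discriminator(real, w, b)).mean()
            + np.log(1.0 - discriminator(fake, w, b)).mean())
```

A generator that matches the real distribution (theta = 3, so fakes are also N(3, 1)) drives the objective down toward the equilibrium value of -2 log 2, i.e. it fools a fixed discriminator far better than theta = 0 does.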
- A100 vs A6000 vs 3090 for computer vision and FP32/FP64
Based on my findings, we don't really need FP64 unless it's for certain medical applications. But "The Best GPUs for Deep Learning in 2020 — An In-depth Analysis" suggests the A100 outperforms the A6000 by ~50% in DL. Also, the StyleGAN project (GitHub - NVlabs/stylegan: StyleGAN - Official TensorFlow Implementation) uses an NVIDIA DGX-1 with 8 Tesla V100 16G GPUs (FP32 = 15 TFLOPS each) to train on a dataset of high-res 1024x1024 images. I'm getting a bit uncertain whether my specific tasks would require FP64, since my dataset is also high-res images. If not, can I assume 5 A6000s (120G total) could provide similar results for StyleGAN?
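A back-of-the-envelope comparison of the two setups in the question can be done directly from peak FP32 throughput. The TFLOPS figures below are approximate published specs, and peak FLOPS ignores memory bandwidth, interconnect, and mixed-precision paths, so treat this as a rough bound only, not a training-time prediction.

```python
# Approximate peak FP32 throughput in TFLOPS (published specs, rough).
V100_FP32 = 15.0    # Tesla V100 16G, as cited in the post
A6000_FP32 = 38.7   # RTX A6000 (approximate spec)

dgx1_total = 8 * V100_FP32   # the DGX-1 config StyleGAN was trained on
five_a6000 = 5 * A6000_FP32  # the proposed alternative

print(f"DGX-1 (8x V100): {dgx1_total:.0f} TFLOPS FP32")
print(f"5x A6000:        {five_a6000:.0f} TFLOPS FP32")
print(f"ratio: {five_a6000 / dgx1_total:.2f}x")
```

On raw FP32 peak, five A6000s exceed the 8-V100 DGX-1; and since GAN training of this kind runs in single precision, FP64 throughput shouldn't be the deciding factor here.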
- [D] Which GPU should I choose?
Yes, that's what I thought. But StyleGAN (https://github.com/NVlabs/stylegan) uses an NVIDIA DGX-1 with 8 Tesla V100 16G GPUs (FP32 = 15 TFLOPS) for training; I'm not sure if that's related to its high-res training images or something else.
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
pix2pix - Image-to-image translation with conditional adversarial nets
Real-ESRGAN-ncnn-vulkan - NCNN implementation of Real-ESRGAN. Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
ArtLine - A Deep Learning based project for creating line art portraits.
lucid-sonic-dreams
cnn-colorize - CNN Model to Colorize Grayscale Images
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
colorize-photos - Colorize all the photos in a directory
ffhq-dataset - Flickr-Faces-HQ Dataset (FFHQ)
colorize - Colorize black and white photos.
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download