ArcaneGAN vs JoJoGAN

| | ArcaneGAN | JoJoGAN |
|---|---|---|
| Mentions | 11 | 11 |
| Stars | 652 | 1,400 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 4 months ago | over 1 year ago |
| Language | Jupyter Notebook | - |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ArcaneGAN
- One for the credits. StableDiffusion + GFPGAN (face restoration) + ArcaneGAN (stylized look). Workflow in the comments :)
-
A bath in the swamp, StableDiffusion 1.4 + GFPGAN (face restoration) + ArcaneGAN (stylized look)
It's a GAN trained on shots from Netflix's Arcane animated series. You can download the model here.
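The three-stage workflow mentioned above (generate, restore faces, stylize) is plain sequential composition. A minimal sketch, with placeholder stage functions standing in for the real Stable Diffusion, GFPGAN, and ArcaneGAN calls (all names here are illustrative, not actual model APIs):

```python
# Minimal sketch of the Stable Diffusion -> GFPGAN -> ArcaneGAN workflow.
# Each stage function is a placeholder; real code would load the model
# and pass actual image tensors through it.

def generate(prompt):
    # stand-in for a Stable Diffusion text-to-image call
    return f"image({prompt})"

def restore_faces(image):
    # stand-in for GFPGAN face restoration
    return f"restored({image})"

def stylize(image):
    # stand-in for ArcaneGAN stylization
    return f"arcane({image})"

def run_pipeline(prompt):
    """Chain the three stages: each one consumes the previous output."""
    image = generate(prompt)
    image = restore_faces(image)
    return stylize(image)
```

The key design point is simply that each model's output image is the next model's input, so the stages can be swapped or reordered independently.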
- [NO SPOILERS] Voice actors in Arcane style
- I turned this old cellphone photo of my wife into a digital painting. The original photo was taken on our vacation in Italy, almost 20 years ago, with a Sony Ericsson K310. StableDiffusion + ArcaneGAN.
-
Is there an AI that can edit images to make them look drawn?
Here are some models that do this with face images that I have tried out: AnimeGANv3 (this one just came out), ArcaneGAN (for faces), JoJoGAN (for faces).
-
ArcaneGAN: Face Portrait to Arcane Style
github: https://github.com/Sxela/ArcaneGAN
-
Found a neural network that transforms portraits into Arcane style. If Joey won't watch Arcane, I'll put him IN it.
Source code: ArcaneGAN by Alex Spirin
-
[P] ArcaneGAN: face portrait to Arcane style
It's not mine, as I've stated in this comment. My repo is here - https://github.com/Sxela/ArcaneGAN
-
Presidential candidates as Arcane characters
I generated these images using ArcaneGAN, created by Alex Spirin.
JoJoGAN
-
Can anyone tell me what type of model can do this?
I've tried style transfer and some GANs like this one: https://github.com/mchong6/JoJoGAN
-
✨ Best Computer Vision Projects with Source Code 🚀
🔗 https://github.com/mchong6/JoJoGAN
-
Does anybody know how to write a dataloader script for JoJoGAN training?
I quite liked this model and wanted to train it on my own dataset, but I kept running into the same CUDA Out of Memory error again and again. After searching across the internet for answers, I eventually found the solution here.
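Since the question above comes up often, here is a minimal sketch of the usual image-folder dataset pattern. It is written in plain Python so the structure is visible; real JoJoGAN training code would subclass `torch.utils.data.Dataset`, open each path with PIL, and wrap the dataset in a `DataLoader` — where lowering `batch_size` is the usual first remedy for CUDA out-of-memory errors. Every name below is illustrative, not part of the JoJoGAN repo.

```python
from pathlib import Path

class StyleImageDataset:
    """Index every image file under a folder of style references.

    Sketch of the torch.utils.data.Dataset pattern: real code would
    subclass it, open each path with PIL, and return a tensor.
    """
    EXTENSIONS = {".png", ".jpg", ".jpeg"}

    def __init__(self, root, transform=None):
        self.paths = sorted(
            p for p in Path(root).rglob("*")
            if p.suffix.lower() in self.EXTENSIONS
        )
        self.transform = transform  # e.g. resize/crop/normalize

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        item = self.paths[idx]  # real code: PIL.Image.open(item)
        return self.transform(item) if self.transform else item

def iter_batches(dataset, batch_size):
    """Stand-in for torch.utils.data.DataLoader: yield fixed-size batches.
    A smaller batch_size is the usual first fix for CUDA out-of-memory."""
    for start in range(0, len(dataset), batch_size):
        end = min(start + batch_size, len(dataset))
        yield [dataset[i] for i in range(start, end)]
```

The two methods `__len__` and `__getitem__` are the whole contract: once they exist, the framework's loader handles batching, shuffling, and parallel workers for you.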
-
In Recent Computer Vision Research, Researchers Introduce 'JoJoGAN': An AI Method for One-Shot Face Stylization
Code for https://arxiv.org/abs/2112.11641 found: https://github.com/mchong6/JoJoGAN
-
Style Transfer from multiple style sources?
Since you want to use multiple images for style, this pipeline came to mind. You take a dataset of style images and fine-tune a pretrained model for 500-1000 iterations, so that the style of those images transfers onto new ones. I am not sure this pipeline is exactly what you need, since it is built for faces in particular, but maybe you can take inspiration from their approach for a generic method.
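The recipe described above — start from pretrained weights and run a few hundred optimization steps against the style set — has a simple loop structure. A toy, hedged illustration: scalar gradient descent stands in for optimizing a StyleGAN generator with a perceptual loss, and every name and number here is illustrative only.

```python
def finetune(pretrained_param, style_target, iterations=500, lr=0.01):
    """Run a fixed number of small gradient steps from a pretrained start.

    Real JoJoGAN-style fine-tuning updates generator weights against a
    perceptual loss over the style images; this scalar version only
    mirrors the loop shape: pretrained init, few iterations, small steps.
    """
    param = pretrained_param
    for _ in range(iterations):
        grad = 2.0 * (param - style_target)  # gradient of (param - target)^2
        param -= lr * grad                   # one optimizer step
    return param

# After ~500 iterations the parameter has moved essentially all the way
# from its pretrained value to the style target.
tuned = finetune(pretrained_param=0.0, style_target=1.0)
```

The point of the small iteration count is the same as in the pipeline above: you want to absorb the style without drifting so far from the pretrained weights that faces stop looking like the input.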
-
Is there an AI that can edit images to make them look drawn?
Here are some models that do this with face images that I have tried out: AnimeGANv3 (this one just came out), ArcaneGAN (for faces), JoJoGAN (for faces).
- Official PyTorch repo for JoJoGAN: One Shot Face Stylization
- JoJoGAN: One Shot Face Stylization
-
[R] JoJoGAN: One Shot Face Stylization
github: https://github.com/mchong6/JoJoGAN
What are some alternatives?
AnimeGANv3 - Use AnimeGANv3 to make your own animation works, including turning photos or videos into anime.
toonify
gan-vae-pretrained-pytorch - Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in pytorch.
stable-diffusion - A latent text-to-image diffusion model
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
AnimeGANv2 - [Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime
stable-diffusion-webui - Stable Diffusion web UI
pixel2style2pixel - Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
articulated-animation - Code for Motion Representations for Articulated Animation paper
InstColorization