| | Few-Shot-Patch-Based-Training | pix2pixHD |
|---|---|---|
| Mentions | 5 | 6 |
| Stars | 603 | 6,530 |
| Growth | - | 0.5% |
| Activity | 1.8 | 0.0 |
| Latest commit | about 3 years ago | 11 months ago |
| Language | C++ | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Few-Shot-Patch-Based-Training
-
To the people who use SD to apply different styles to videos
Here are the code and weights: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training
-
Another ControlNet test! Sorry for the Swedish!
PS: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training in case you haven't taken a look; it works wonders.
- InstructPix2Pix Video: "Turn the wave into trash"
-
A quick demonstration of how I accomplished this animation.
Then why did you limit yourself in exactly the ways I described, by using the appropriate tools meant for video? Because it looked rough until you pulled out EbSynth, right? Try this. It'll look even better, and you won't have to deal with janky manual keyframe interpolation. That's the difference the right tool makes.
-
[R] Few-Shot Patch-Based Training (Siggraph 2020) - Dr. Ondřej Texler - Link to free zoom lecture by the author in comments
Interactive Video Stylization Using Few-Shot Patch-Based Training (SIGGRAPH 2020). Project page: https://ondrejtexler.github.io/patch-based_training/index.html Git: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training
pix2pixHD
- How do I run more than 200 epochs in training a Pix2PixHD model?
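For reference, pix2pixHD's training script counts total epochs as `--niter` (epochs at the initial learning rate, default 100) plus `--niter_decay` (epochs of linear decay to zero, default 100), which is where the 200-epoch limit comes from. A hedged sketch of going past it; the experiment name and dataroot below are placeholders, not from the question:

```shell
# pix2pixHD trains for (--niter + --niter_decay) epochs, 100 + 100 = 200 by
# default. Raise both flags to train longer; --continue_train resumes from
# the most recent checkpoint under checkpoints/<name>/.
python train.py --name my_experiment \
    --dataroot ./datasets/my_data \
    --niter 200 \
    --niter_decay 200 \
    --continue_train
```

This sketch assumes the stock `options/train_options.py`; if the fork in use hard-codes a schedule elsewhere, the flags alone won't be enough.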
-
NVIDIA DLSS Now Available in Over 150 Games, Including Dying Light 2 Stay Human, Sifu and Phantasy Star Online 2 New Genesis
Well, maybe not, considering things like pix2pix can generate detail from just solid shapes and colors.
-
Image to hand drawn
Sources: U2Net, ArtLine, Pix2PixHD, APDrawingGAN
-
[P] I made FaceShop! Instance segmentation + CGAN for editing faces (badly)
Pix2PixHD (from DeepSIM)
Uses a mix of instance segmentation (BiSeNet) and conditional GAN, and is heavily inspired by the Pix2PixHD and DeepSIM papers. Will have more details when I wake up!
-
How to access a class object when I use torch.nn.DataParallel()?
I used the Pix2PixHD implementation on GitHub, if you want to see the full code.
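The usual answer to this question: `torch.nn.DataParallel` wraps the original model, so custom attributes and methods live on the wrapper's `.module` attribute, not on the wrapper itself. A minimal sketch (the `TinyModel` class and its names are illustrative stand-ins, not from the Pix2PixHD code):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Stand-in for a model class with custom members (e.g. a pix2pixHD generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(4, 2)
        self.label = "generator"

    def custom_method(self):
        return self.label

model = nn.DataParallel(TinyModel())

# The wrapper only exposes nn.Module machinery; your own attributes are
# reached through .module, which is the original TinyModel instance.
print(model.module.custom_method())        # the custom method still works
print(isinstance(model.module, TinyModel)) # True: .module is the original object
```

The same pattern applies when saving checkpoints: call `model.module.state_dict()` so the saved keys don't carry the `module.` prefix.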
What are some alternatives?
Deep-Exemplar-based-Video-Colorization - The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
pytorch-CycleGAN-and-pix2pix - Image-to-Image Translation in PyTorch
iSeeBetter - iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
awesome-colab-notebooks - Collection of google colaboratory notebooks for fast and easy experiments
Deep-Image-Analogy - The source code of 'Visual Attribute Transfer through Deep Image Analogy'.
sofgan - [TOG 2022] SofGAN: A Portrait Image Generator with Dynamic Styling
BlendGAN - Official PyTorch implementation of "BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation" (NeurIPS 2021)
face-parsing.PyTorch - Using modified BiSeNet for face parsing in PyTorch
ganspace - Discovering Interpretable GAN Controls [NeurIPS 2020]
generative-inpainting-pytorch - A PyTorch reimplementation for paper Generative Image Inpainting with Contextual Attention (https://arxiv.org/abs/1801.07892)
image_edit - Demos of neural image editing
contrastive-unpaired-translation - Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)