pix2pix
Few-Shot-Patch-Based-Training
| | pix2pix | Few-Shot-Patch-Based-Training |
|---|---|---|
| Mentions | 13 | 5 |
| Stars | 9,859 | 603 |
| Growth | - | - |
| Activity | 0.0 | 1.8 |
| Latest commit | almost 3 years ago | about 3 years ago |
| Language | Lua | C++ |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pix2pix
-
Any work on Style transfer using Stable Diffusion based on image-mask pairs similar to Pix2Pix
I have previously worked on retraining the Pix2Pix GAN for image-to-image style transfer using image-mask pairs. I expect Stable Diffusion to be better than Pix2Pix, and the problem sounds like something that should have been tackled already. I am familiar with text-based instructions for style transfer using SD (InstructPix2Pix), but retraining with image-mask pairs should provide better results. Does anyone know if anything like that exists already? Reference for Pix2Pix: https://phillipi.github.io/pix2pix/
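For paired training of the kind described above, pix2pix's data scripts expect each sample as a single image with the input on the left half and the target on the right half. A minimal NumPy sketch of that pairing step; `combine_pair` is a hypothetical helper mirroring the idea of the repo's `scripts/combine_A_and_B.py`:

```python
import numpy as np

def combine_pair(img_a, img_b):
    # Hypothetical helper: place the input image (A) and its paired
    # target/mask (B) side by side, as pix2pix's combine_A_and_B.py does.
    assert img_a.shape == img_b.shape, "A/B pairs must share dimensions"
    return np.concatenate([img_a, img_b], axis=1)
```

An H×W input and its H×W mask then become one H×2W training image that the pix2pix loaders can split back apart.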
-
Hello, nooby programmer here, but I do 3D art and I have a few questions
Additionally, there are some open source AI models for texture generation, such as pix2pix: https://phillipi.github.io/pix2pix/.
-
Predict Fourier spectrum
Sounds fun. Also sounds like image-to-image translation in 1D? Pix2pix is a famous implementation that uses UNet + adversarial loss: https://github.com/phillipi/pix2pix
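The UNet-plus-adversarial-loss combination mentioned above trains the generator on a conditional GAN term plus an L1 reconstruction term (the pix2pix paper weights L1 with λ = 100). A minimal NumPy sketch of the generator's objective; `generator_loss` and `bce` are hypothetical helpers for illustration:

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy over discriminator probabilities.
    eps = 1e-12
    return -np.mean(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))

def generator_loss(d_on_fake, fake_img, real_img, lam=100.0):
    # Adversarial term: the generator wants D to output 1 on its fakes.
    adv = bce(d_on_fake, np.ones_like(d_on_fake))
    # L1 term: keep the output close to the paired ground-truth image.
    l1 = np.mean(np.abs(fake_img - real_img))
    return adv + lam * l1
```

The same objective applies unchanged to a 1D signal such as a Fourier spectrum; only the UNet's convolutions change from 2D to 1D.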
- Things I Have Drawn is a site in which the things kids draw are real
-
Is it possible to remove a sticker from a photo?
I guess everything is rasterized correctly. u/Kelaifu is correct that there is no software that can guess your face where it's missing, but there are machine learning techniques and implementations (see: https://phillipi.github.io/pix2pix/) that can go pretty far with guesses.
-
Explore the water-logged city of Ys!
[Image generated using Machine Learning algorithm pix2pix trained by me using this dataset: CMP Facade Dataset]
-
[D] Geo DeepFakes are not far away
pix2pix can already transform images between satellite and maps (https://phillipi.github.io/pix2pix/). Will there be any difference if I use pix2pix to transform satellite images to maps and back?
-
Help with GAN Pix2Pix Code!
```python
# Resize all images in a folder by 1/2
# modified from https://github.com/phillipi/pix2pix/blob/master/scripts/combine_A_and_B.py
# resize_A_to_C.py
import os
import argparse

import cv2

parser = argparse.ArgumentParser("resize images by 1/2")
parser.add_argument(
    "--fold_A",
    dest="fold_A",
    help="input directory for original images",
    type=str,
    default="./test_A",
)
parser.add_argument(
    "--fold_C", dest="fold_C", help="output directory", type=str, default="./test_C"
)
args = parser.parse_args()

for arg in vars(args):
    print("[%s] = " % arg, getattr(args, arg))

img_list = os.listdir(args.fold_A)
if not os.path.isdir(args.fold_C):
    os.makedirs(args.fold_C)

for name_A in img_list:
    path_A = os.path.join(args.fold_A, name_A)
    # skip hidden files; os.path.isfile also skips folders
    if os.path.isfile(path_A) and not name_A.startswith("."):
        im_A = cv2.imread(path_A, cv2.IMREAD_COLOR)
        # scale both dimensions down by a factor of 1/2
        scale_down = 0.5
        im_C = cv2.resize(
            im_A, None, fx=scale_down, fy=scale_down, interpolation=cv2.INTER_LINEAR
        )
        # store the resized image under the same name as in folder A
        path_C = os.path.join(args.fold_C, name_A)
        cv2.imwrite(path_C, im_C)
```
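The key effect of the script above is that `fx=fy=0.5` halves each spatial dimension. A cv2-free sketch of the same shape change using nearest-neighbour downsampling (keeping every second row and column); `halve_nearest` is a hypothetical stand-in, not part of the pix2pix scripts:

```python
import numpy as np

def halve_nearest(img):
    # Stand-in for cv2.resize(img, None, fx=0.5, fy=0.5):
    # nearest-neighbour downsampling keeps every second row and column,
    # so an H x W x C image becomes H//2 x W//2 x C.
    return img[::2, ::2]
```

cv2's `INTER_LINEAR` averages neighbouring pixels instead of dropping them, but the output dimensions are the same.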
-
cGAN (img2img) Translation Applied to 3D Scene Post-Editing
This project uses the pix2pix Image translation architecture (https://phillipi.github.io/pix2pix/) for 3D image post-processing. The goal was to test the 3D applications of this type of architecture.
-
trained the model based on dark art sketches. got such bizarre forms of life
It seems like this would even make for a cool website, like pix2pix
Few-Shot-Patch-Based-Training
-
To the people who use SD to apply different styles to videos
and here is the Code and weights: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training
-
Another CN test! Sorry for the swedish!
PS: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training in case you haven't taken a look; it works wonders.
- InstructPix2Pix Video: "Turn the wave into trash"
-
A quick demonstration of how I accomplished this animation.
Then why did you limit yourself in exactly the ways I described, by using the appropriate tools meant for video? Because it looked like shit until you pulled out ebsynth, right? Try this. It'll look even better and you won't have to deal with janky manual keyframe interpolation. That's the difference the right tool makes.
-
[R] Few-Shot Patch-Based Training (Siggraph 2020) - Dr. Ondřej Texler - Link to free zoom lecture by the author in comments
Interactive Video Stylization Using Few-Shot Patch-Based Training (Siggraph 2020) Project page: https://ondrejtexler.github.io/patch-based_training/index.html Git: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training
What are some alternatives?
stylegan - StyleGAN - Official TensorFlow Implementation
Deep-Exemplar-based-Video-Colorization - The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
CycleGAN - Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
iSeeBetter - iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
Deep-Image-Analogy - The source code of 'Visual Attribute Transfer through Deep Image Analogy'.
naver-webtoon-faces - Generative models on NAVER Webtoon faces
BlendGAN - Official PyTorch implementation of "BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation" (NeurIPS 2021)
awesome-image-translation - A collection of awesome resources on image-to-image translation.
ganspace - Discovering Interpretable GAN Controls [NeurIPS 2020]
art-DCGAN - Modified implementation of DCGAN focused on generative art. Includes pre-trained models for landscapes, nude-portraits, and others.
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs