pix2pix
perspective-change
| | pix2pix | perspective-change |
|---|---|---|
| Mentions | 13 | 1 |
| Stars | 9,859 | 0 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | almost 3 years ago | almost 3 years ago |
| Language | Lua | Python |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pix2pix
-
Any work on Style transfer using Stable Diffusion based on image-mask pairs similar to Pix2Pix
I have previously worked on retraining the Pix2Pix GAN for image-to-image style transfer using image-mask pairs. I expect Stable Diffusion to be better than Pix2Pix, but the problem sounds like something that should have been tackled already. I am familiar with text-based instructions for style transfer using SD (Instruct Pix2Pix), but retraining with image-mask pairs should provide better results. Does anyone know if anything like that exists already? Reference for Pix2Pix: https://phillipi.github.io/pix2pix/
-
Hello, nooby programmer here, but I do 3D art and I have a few questions
Additionally, there are some open source AI models for texture generation, such as pix2pix: https://phillipi.github.io/pix2pix/.
-
Predict Fourier spectrum
Sounds fun. Also sounds like image-to-image translation in 1D? Pix2pix is a famous implementation that uses UNet + adversarial loss: https://github.com/phillipi/pix2pix
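The "UNet + adversarial loss" description can be made concrete with the pix2pix generator objective: a GAN term pushing the discriminator's score on generated outputs toward 1, plus a weighted L1 term tying outputs to the paired targets. Below is a minimal NumPy sketch of that objective; the function name and toy tensors are illustrative, and `lambda_l1=100.0` is the weight reported in the pix2pix paper, not something this thread specifies.

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lambda_l1=100.0):
    """Sketch of the pix2pix generator objective (hypothetical helper).

    d_fake: discriminator scores in (0, 1) for generated images.
    fake, target: generated and ground-truth image arrays of equal shape.
    """
    eps = 1e-8  # numerical guard for log(0)
    # Non-saturating GAN term: the generator wants D(G(x)) -> 1
    adv = -np.mean(np.log(d_fake + eps))
    # L1 reconstruction term: keeps outputs close to the paired target
    l1 = np.mean(np.abs(fake - target))
    return adv + lambda_l1 * l1
```

For a 1D Fourier-spectrum target, `fake` and `target` would simply be 1D arrays; the objective is unchanged, which is why the "image-to-image translation in 1D" framing fits.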
- Things I Have Drawn is a site in which the things kids draw are real
-
Is it possible to remove a sticker from a photo?
I guess everything is rasterized correctly. While u/Kelaifu is correct that no software can guess your face where it's missing, there are machine learning techniques and implementations (see: https://phillipi.github.io/pix2pix/) that can get pretty far with guesses.
-
Explore the water-logged city of Ys!
[Image generated using Machine Learning algorithm pix2pix trained by me using this dataset: CMP Facade Dataset]
-
[D] Geo DeepFakes are not far away
pix2pix can already transform images between satellite and maps (https://phillipi.github.io/pix2pix/). Will there be any difference if I use pix2pix to transform satellite images to maps and back?
-
Help with GAN Pix2Pix Code!
# resize_A_to_C.py
# Resize all images in a folder to half size.
# Modified from https://github.com/phillipi/pix2pix/blob/master/scripts/combine_A_and_B.py
import argparse
import os

import cv2

parser = argparse.ArgumentParser("resize images by 1/2")
parser.add_argument(
    "--fold_A",
    dest="fold_A",
    help="input directory for original images",
    type=str,
    default="./test_A",
)
parser.add_argument(
    "--fold_C", dest="fold_C", help="output directory", type=str, default="./test_C"
)
args = parser.parse_args()

for arg in vars(args):
    print("[%s] = " % arg, getattr(args, arg))

img_list = os.listdir(args.fold_A)
if not os.path.isdir(args.fold_C):
    os.makedirs(args.fold_C)

for name_A in img_list:
    path_A = os.path.join(args.fold_A, name_A)
    # Skip hidden files (. and ..) and subfolders
    if os.path.isfile(path_A) and not name_A.startswith("."):
        im_A = cv2.imread(path_A, cv2.IMREAD_COLOR)
        # Scale both dimensions down by a factor of 2
        scale_down = 0.5
        im_C = cv2.resize(
            im_A, None, fx=scale_down, fy=scale_down, interpolation=cv2.INTER_LINEAR
        )
        # Store the resized image under the same name in folder C
        path_C = os.path.join(args.fold_C, name_A)
        cv2.imwrite(path_C, im_C)
-
cGAN (img2img) Translation Applied to 3D Scene Post-Editing
This project uses the pix2pix Image translation architecture (https://phillipi.github.io/pix2pix/) for 3D image post-processing. The goal was to test the 3D applications of this type of architecture.
-
trained the model based on dark art sketches. got such bizarre forms of life
It seems like this would even make for a cool website, like pix2pix.
perspective-change
-
cGAN (img2img) Translation Applied to 3D Scene Post-Editing
- Image 1, Scene Re-composition: takes an image of a simple 3D scene and shows what that scene would look like from another angle (30 degree offset) (https://github.com/ronan-pickell/perspective-change)
What are some alternatives?
stylegan - StyleGAN - Official TensorFlow Implementation
CycleGAN - Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
naver-webtoon-faces - Generative models on NAVER Webtoon faces
awesome-image-translation - A collection of awesome resources on image-to-image translation.
art-DCGAN - Modified implementation of DCGAN focused on generative art. Includes pre-trained models for landscapes, nude-portraits, and others.
dataset-tools - Tools for quickly normalizing image datasets
Few-Shot-Patch-Based-Training - The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
Faces2Anime - Faces2Anime: Cartoon Style Transfer in Faces using Generative Adversarial Networks. Masters Thesis 2021 @ NTUST.