Text2LIVE vs DeepSIM

| | Text2LIVE | DeepSIM |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 849 | 418 |
| Growth | - | - |
| Activity | 0.0 | 1.8 |
| Latest commit | about 1 year ago | over 2 years ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Text2LIVE
-
This neural network from the Weizmann Institute of Science and NVIDIA Research can apply visual effects to images and video using simple text prompts.
The source code of the neural network can be found on GitHub: https://github.com/omerbt/Text2LIVE
-
Text2LIVE: Text-Driven Layered Image and Video Editing. A new zero-shot technique for editing the appearance of images and video!
"We present a method for zero-shot, text-driven appearance manipulation in natural images and videos. Specifically, given an input image or video and a target text prompt, our goal is to edit the appearance of existing objects (e.g., an object's texture) or augment the scene with new visual effects (e.g., smoke, fire) in a semantically meaningful manner. Our framework trains a generator using an internal dataset of training examples, extracted from a single input (image or video and target text prompt), while leveraging an external pre-trained CLIP model to establish our losses. Rather than directly generating the edited output, our key idea is to generate an edit layer (color + opacity) that is composited over the original input. This allows us to constrain the generation process and maintain high fidelity to the original input via novel text-driven losses that are applied directly to the edit layer. Our method neither relies on a pre-trained generator nor requires user-provided edit masks. Thus, it can perform localized, semantic edits on high-resolution natural images and videos across a variety of objects and scenes.

Semi-transparent effects: Text2LIVE successfully augments the input scene with complex semi-transparent effects without changing irrelevant content in the image."

Demo: https://text2live.github.io
arXiv: https://arxiv.org/abs/2204.02491
GitHub: https://github.com/omerbt/Text2LIVE
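The layered-editing idea described above, a generated color + opacity layer composited over the original frame, amounts to standard alpha blending. A minimal NumPy sketch of that compositing step (function and variable names are illustrative, not taken from the Text2LIVE code):

```python
import numpy as np

def composite_edit_layer(base, edit_rgb, edit_alpha):
    """Composite a generated edit layer over the original input.

    base:       (H, W, 3) float array in [0, 1], the original image/frame.
    edit_rgb:   (H, W, 3) float array in [0, 1], the edit layer's color.
    edit_alpha: (H, W, 1) float array in [0, 1], the edit layer's opacity.
    Returns the edited image: alpha-weighted blend of edit and base.
    """
    return edit_alpha * edit_rgb + (1.0 - edit_alpha) * base

# Toy example: opacity 1 replaces the pixel, opacity 0 keeps the original,
# intermediate opacity gives a semi-transparent effect.
base = np.zeros((2, 2, 3))
edit = np.ones((2, 2, 3))
alpha = np.array([[[1.0], [0.0]],
                  [[0.5], [0.0]]])
out = composite_edit_layer(base, edit, alpha)
```

Because only the edit layer is generated, pixels where the predicted opacity is zero are guaranteed to match the input exactly, which is what lets the method leave irrelevant content untouched.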
DeepSIM
-
[P] I made FaceShop! Instance segmentation + CGAN for editing faces (badly)
Pix2PixHD (from DeepSIM)
Uses a mix of instance segmentation (BiSeNet) and conditional GAN, and is heavily inspired by the Pix2PixHD and DeepSIM papers. Will have more details when I wake up!
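In a Pix2PixHD-style pipeline like the one described, the segmentation result is typically handed to the conditional generator as a one-hot label map rather than raw class indices. A small sketch of that conditioning step, assuming a (H, W) integer map of face-part labels (names are hypothetical, not from the FaceShop code):

```python
import numpy as np

def labels_to_onehot(label_map, num_classes):
    """Convert a (H, W) integer label map into a (num_classes, H, W)
    one-hot tensor, the usual conditioning input for a Pix2PixHD-style
    generator."""
    h, w = label_map.shape
    onehot = np.zeros((num_classes, h, w), dtype=np.float32)
    # Set channel `label_map[y, x]` to 1 at each spatial position.
    onehot[label_map, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    return onehot

# Toy 2x2 map with three face-part classes (e.g., skin / hair / background).
label_map = np.array([[0, 1],
                      [2, 1]])
cond = labels_to_onehot(label_map, num_classes=3)
```

Editing then amounts to painting different class indices into `label_map` and re-running the generator on the new conditioning tensor.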
-
Israeli Researchers Unveil DeepSIM, a Neural Generative Model for Conditional Image Manipulation Based on a Single Image
What are some alternatives?
Paint-by-Sketch - Stable Diffusion-based image manipulation method with a sketch and reference image
anycost-gan - [CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
SDEdit - PyTorch implementation for SDEdit: Image Synthesis and Editing with Stochastic Differential Equations
ml-gmpi - Official PyTorch implementation of GMPI (ECCV 2022, Oral Presentation)
pytorch-CycleGAN-and-pix2pix - Image-to-Image Translation in PyTorch
TargetCLIP - [ECCV 2022] Official PyTorch implementation of the paper Image-Based CLIP-Guided Essence Transfer.
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
autodistill-metaclip - MetaCLIP module for use with Autodistill.
face-parsing.PyTorch - Using modified BiSeNet for face parsing in PyTorch
sketchedit - SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches, CVPR2022
stylegan3-editing - Official Implementation of "Third Time's the Charm? Image and Video Editing with StyleGAN3" (AIM ECCVW 2022) https://arxiv.org/abs/2201.13433