Few-Shot-Patch-Based-Training vs Deep-Image-Analogy

| | Few-Shot-Patch-Based-Training | Deep-Image-Analogy |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 603 | 1,367 |
| Growth | - | 0.0% |
| Activity | 1.8 | 0.0 |
| Latest commit | about 3 years ago | over 2 years ago |
| Language | C++ | C++ |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
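The activity number above can be read as a recency-weighted commit count. The site's exact formula is not given here, so the following is only an illustrative sketch of such a score; the exponential decay and the 30-day half-life are assumptions, not the real metric:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, now=None, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes
    exponentially less the older it is (half-life in days)."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score
```

Under this sketch, a commit made today contributes roughly 1.0 and a commit made a month ago roughly 0.5, so projects with recent commits score higher even at equal total commit counts.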
Few-Shot-Patch-Based-Training
- "To the people who use SD to apply different styles to videos" - and here are the code and weights: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training
- "Another CN test! Sorry for the Swedish!" - PS: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training in case you haven't taken a look; it works wonders.
- InstructPix2Pix Video: "Turn the wave into trash"
- "A quick demonstration of how I accomplished this animation." - Then why did you limit yourself in exactly the ways I described, instead of using the appropriate tools meant for video? Because it looked like shit until you pulled out EbSynth, right? Try this. It'll look even better and you won't have to deal with janky manual keyframe interpolation. That's the difference the right tool makes.
- [R] Few-Shot Patch-Based Training (SIGGRAPH 2020) - Dr. Ondřej Texler - link to free Zoom lecture by the author in comments. Interactive Video Stylization Using Few-Shot Patch-Based Training (SIGGRAPH 2020). Project page: https://ondrejtexler.github.io/patch-based_training/index.html Git: https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training
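For readers wondering what "patch-based training" means in the SIGGRAPH 2020 paper linked above: the core idea is to train an image-to-image network only on small patches cropped from a handful of hand-stylized keyframes, then apply the fully convolutional network to whole frames. The sketch below illustrates that idea only; the network architecture, patch size, batch size, and L1 loss are assumptions for demonstration, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

patch = 32  # assumed patch size, for illustration

# Tiny fully convolutional image-to-image network (illustrative, not the paper's)
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# One "keyframe": an input video frame and its hand-stylized counterpart
# (random tensors stand in for real images here)
frame = torch.rand(3, 256, 256)
styled = torch.rand(3, 256, 256)

for step in range(10):  # tiny loop for illustration
    # Sample a batch of random patch locations from the keyframe
    ys = torch.randint(0, 256 - patch, (8,)).tolist()
    xs = torch.randint(0, 256 - patch, (8,)).tolist()
    x = torch.stack([frame[:, y:y + patch, x_:x_ + patch] for y, x_ in zip(ys, xs)])
    t = torch.stack([styled[:, y:y + patch, x_:x_ + patch] for y, x_ in zip(ys, xs)])
    loss = nn.functional.l1_loss(net(x), t)
    opt.zero_grad(); loss.backward(); opt.step()

# Because the net is fully convolutional, it can stylize whole frames after training
out = net(frame.unsqueeze(0))  # shape: (1, 3, 256, 256)
```

Training on patches rather than full frames is what makes the few-shot setting workable: a handful of stylized keyframes yields thousands of distinct training patches.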
Deep-Image-Analogy
- "Anyone who likes machine learning is an immoral rat" - (1) I don't think there's much of a point in making formal comparisons between machine learning and human intelligence. It can be amusing to remark on machine-learning applications producing human-like outputs, like artistic style transfer with image analogy, but ultimately, neural networks are just statistical models trained to produce particular outputs given a particular type of input. The framework certainly allows for much more flexibility and expressiveness than the statistical models we have used in the past, but in those specific contexts it is simply nothing compared to human intelligence and probably never will be.
What are some alternatives?
Deep-Exemplar-based-Video-Colorization - The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
fast-artistic-videos - Video style transfer using feed-forward networks.
iSeeBetter - iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
tensorflow - An Open Source Machine Learning Framework for Everyone
BlendGAN - Official PyTorch implementation of "BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation" (NeurIPS 2021)
OpenCV - Open Source Computer Vision Library
ganspace - Discovering Interpretable GAN Controls [NeurIPS 2020]
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
image_edit - Demos of neural image editing
pytorch-CycleGAN-and-pix2pix - Image-to-Image Translation in PyTorch
pix2pix - Image-to-image translation with conditional adversarial nets
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"