Deep-Exemplar-based-Video-Colorization
CycleGAN
| | Deep-Exemplar-based-Video-Colorization | CycleGAN |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 317 | 12,132 |
| Growth | - | - |
| Activity | 0.0 | 2.5 |
| Latest commit | over 1 year ago | 7 months ago |
| Language | Python | Lua |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Deep-Exemplar-based-Video-Colorization
- [Machine Learning] [R] Exemplar-based video colorization
Show HN: I made a new AI colorizer
Thanks! yeah, a few people have used the tool to colorize videos, frame by frame. For example Lord of the flies (1963): https://www.dailymotion.com/video/x8eiho4
Although I'd recommend colorizing a few key frames and then using https://github.com/zhangmozhe/Deep-Exemplar-based-Video-Colo...
Cool, yeah, my next model will be better for comic books. You can also use the 'Surprise Me' button in the editor and you'll get some decent results.
- 1929 video from Shanghai, upscaled to 4K color using AI
[P] Colorizing the legacy videos with attention mechanism
We recently released the code for our paper "Deep Exemplar-based Video Colorization". The code, along with a Colab demo, is available at: https://github.com/zhangmozhe/Deep-Exemplar-based-Video-Colorization. Feel free to give it a try.
CycleGAN
good computer vision or deep learning projects in github
CycleGAN (GitHub: https://github.com/junyanz/CycleGAN) is a deep learning-based image-to-image translation approach without paired examples, implemented in PyTorch.
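The key idea that lets CycleGAN work without paired examples is cycle consistency: two mappings G: X→Y and F: Y→X are trained jointly so that F(G(x)) ≈ x and G(F(y)) ≈ y. A minimal sketch of that loss term, using hypothetical stand-in functions `G` and `F` in place of real networks (not the repo's actual code):

```python
import numpy as np

# Hypothetical stand-in mappings for illustration; in CycleGAN these
# would be learned generator networks, not fixed linear functions.
def G(x):
    """Forward mapping X -> Y (stand-in)."""
    return 2.0 * x

def F(y):
    """Backward mapping Y -> X (stand-in)."""
    return 0.5 * y

def cycle_consistency_loss(x, y):
    """L1 cycle loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 (averaged)."""
    forward_cycle = np.mean(np.abs(F(G(x)) - x))   # x -> Y -> back to X
    backward_cycle = np.mean(np.abs(G(F(y)) - y))  # y -> X -> back to Y
    return forward_cycle + backward_cycle

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(cycle_consistency_loss(x, y))  # 0.0 here, since F exactly inverts G
```

In training, this cycle term is added to the usual adversarial losses; it penalizes mappings that can't reconstruct their input, which is what makes unpaired translation well-posed.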
AI will take over all the jobs
It's image translation, check this out https://github.com/junyanz/CycleGAN
What are some alternatives?
Few-Shot-Patch-Based-Training - The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
pix2pix - Image-to-image translation with conditional adversarial nets
mmagic - OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc.
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
OASIS - Official implementation of the paper "You Only Need Adversarial Supervision for Semantic Image Synthesis" (ICLR 2021)
pytorch-CycleGAN-and-pix2pix - Image-to-Image Translation in PyTorch
TraVeLGAN_with_perceptual_loss - Implementation code for the master's thesis project "Photo-to-Emoji Transformation with TraVeLGAN and Perceptual Loss"
contrastive-unpaired-translation - Contrastive unpaired image-to-image translation, with faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)
HyperGAN - Composable GAN framework with api and user interface
faceswap-GAN - A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
ArtGAN - ArtGAN + WikiArt: This work presents a series of new approaches to improve GAN for conditional image synthesis and we name the proposed model as “ArtGAN”.
anycost-gan - [CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing