AI-Art vs VQGAN-CLIP-Video

| | AI-Art | VQGAN-CLIP-Video |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 379 | 22 |
| Growth | - | - |
| Activity | 0.0 | 1.8 |
| Last Commit | about 2 years ago | about 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
What are some alternatives?
cycle-gan-pytorch - This repository contains an implementation of the Cycle-GAN architecture for style transfer, along with instructions for training on your own dataset.
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
pytorch-CycleGAN-and-pix2pix - Image-to-Image Translation in PyTorch
optical.flow.demo - A project that uses optical flow and machine learning to detect aimhacking in video clips.
neural-style-pt - PyTorch implementation of neural style transfer algorithm
vqgan-clip-app - Local image generation using VQGAN-CLIP or CLIP guided diffusion
pytorch-neural-style-transfer - Reconstruction of the original paper on neural style transfer (Gatys et al.). I've additionally included reconstruction scripts which allow you to reconstruct only the content or the style of the image - for better understanding of how NST works.
feed_forward_vqgan_clip - Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
Neural-Style-Transfer - PyTorch implementation of neural style transfer
moviepy - Video editing with Python