ez-text2video vs video-diffusion-pytorch

| | ez-text2video | video-diffusion-pytorch |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 94 | 1,136 |
| Growth | - | - |
| Activity | 5.2 | 4.6 |
| Last commit | 12 months ago | 20 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
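The exact formula behind the activity score isn't published, but the description above (a relative score where recent commits outweigh older ones) can be illustrated with a toy recency-weighted sketch. The function name, half-life decay, and parameters here are all assumptions for illustration, not the site's actual metric:

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=30):
    """Toy recency-weighted commit score (hypothetical formula):
    each commit contributes 2 ** (-age_in_days / half_life_days),
    so a commit from today counts 1.0 and older commits decay toward 0."""
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).days
        score += 2 ** (-age_days / half_life_days)
    return score

now = datetime(2024, 1, 1)
recent = [now - timedelta(days=d) for d in (1, 3, 7)]     # active project
old = [now - timedelta(days=d) for d in (300, 320, 340)]  # dormant project

# Same number of commits, but the recently active project scores far higher.
print(activity_score(recent, now) > activity_score(old, now))  # True
```

This matches the comparison table above: video-diffusion-pytorch's last commit was 20 days ago versus 12 months for ez-text2video, yet the scores (4.6 vs 5.2) are relative across all tracked projects, not a direct function of one repo's history.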
ez-text2video
- ez-text2video: Easily run text-to-video diffusion model locally with custom video length, fps, and dimensions (works with just 4GB of VRAM).
- ez-text2video: Run text-to-video locally with 4GB video cards, custom video length + fps, and adjustable video height/width
video-diffusion-pytorch
- Anime Video Generation using diffusion as in Imagen.
  Trainer: https://github.com/lucidrains/video-diffusion-pytorch
What are some alternatives?
storyteller - Multimodal AI Story Teller, built with Stable Diffusion, GPT, and neural text-to-speech
make-a-video-pytorch - Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
nuwa-pytorch - Implementation of NÜWA, state of the art attention network for text to video synthesis, in Pytorch
Awesome-Video-Diffusion - A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
DALLE2-video - Direct application of DALLE-2 to video synthesis, using factored space-time Unet and Transformers
ReuseAndDiffuse - Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation
phenaki-pytorch - Implementation of Phenaki Video, which uses Mask GIT to produce text guided videos of up to 2 minutes in length, in Pytorch
video-diffusion-pytorch - Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to Video Generation - in Pytorch
DDPM_inversion - Official pytorch implementation of the paper: "An Edit Friendly DDPM Noise Space: Inversion and Manipulations". CVPR 2024.