ez-text2video vs storyteller

| | ez-text2video | storyteller |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 94 | 474 |
| Growth | - | - |
| Activity | 5.2 | 5.9 |
| Last commit | 12 months ago | 9 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | MIT License |
*Stars* - the number of stars a project has on GitHub. *Growth* - month-over-month growth in stars.
*Activity* - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects tracked.
Posts mentioning ez-text2video
- ez-text2video: Run text-to-video model locally with custom video length, fps, and dimensions (works with just 4GB of VRAM).
- [P] ez-text2video: Easily run text-to-video diffusion model locally with custom video length, fps, and dimensions (works with just 4GB of VRAM).
- ez-text2video: Easily run text-to-video diffusion model locally with custom video length, fps, and dimensions (works with just 4GB of VRAM).
- ez-text2video: Run text-to-video locally with 4GB video cards, custom video length + fps, and adjustable video height/width
Posts mentioning storyteller
What are some alternatives?
video-diffusion-pytorch - Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to Video Generation - in Pytorch
CogView - Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
nuwa-pytorch - Implementation of NÜWA, state of the art attention network for text to video synthesis, in Pytorch
Sketch-Guided-Stable-Diffusion - Unofficial Implementation of the Google Paper - https://sketch-guided-diffusion.github.io/
DALLE2-video - Direct application of DALLE-2 to video synthesis, using factored space-time Unet and Transformers
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
phenaki-pytorch - Implementation of Phenaki Video, which uses Mask GIT to produce text guided videos of up to 2 minutes in length, in Pytorch
Gen-L-Video - The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".
make-a-video-pytorch - Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
Awesome-Video-Diffusion - A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
LLM-groundedDiffusion - LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models (LLM-grounded Diffusion: LMD)