storyteller vs ez-text2video

| | storyteller | ez-text2video |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 475 | 92 |
| Growth | - | - |
| Activity | 5.9 | 5.2 |
| Latest commit | 9 months ago | 12 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ez-text2video
- ez-text2video: Run text-to-video model locally with custom video length, fps, and dimensions (works with just 4GB of VRAM).
- [P] ez-text2video: Easily run text-to-video diffusion model locally with custom video length, fps, and dimensions (works with just 4GB of VRAM).
- ez-text2video: Easily run text-to-video diffusion model locally with custom video length, fps, and dimensions (works with just 4GB of VRAM).
- ez-text2video: Run text-to-video locally with 4GB video cards, custom video length + fps, and adjustable video height/width
What are some alternatives?
CogView - Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
video-diffusion-pytorch - Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to Video Generation - in Pytorch
Sketch-Guided-Stable-Diffusion - Unofficial Implementation of the Google Paper - https://sketch-guided-diffusion.github.io/
nuwa-pytorch - Implementation of NÜWA, state of the art attention network for text to video synthesis, in Pytorch
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
DALLE2-video - Direct application of DALLE-2 to video synthesis, using factored space-time Unet and Transformers
Gen-L-Video - The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".
phenaki-pytorch - Implementation of Phenaki Video, which uses Mask GIT to produce text guided videos of up to 2 minutes in length, in Pytorch
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
make-a-video-pytorch - Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
Awesome-Video-Diffusion - A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
LLM-groundedDiffusion - LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models (LLM-grounded Diffusion: LMD)