storyteller vs Gen-L-Video
| | storyteller | Gen-L-Video |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 475 | 262 |
| Growth | - | - |
| Activity | 5.9 | 7.7 |
| Latest commit | 9 months ago | 4 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
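The exact formula behind the activity score is not published; a minimal sketch of a recency-weighted metric in this spirit might look like the following. The exponential-decay weighting and the 90-day half-life here are assumptions for illustration, not the site's actual method:

```python
from datetime import datetime, timedelta, timezone


def activity_score(commit_dates, half_life_days=90):
    """Recency-weighted commit count: each commit contributes a weight
    that halves every `half_life_days` days, so recent commits count
    more than older ones. (Hypothetical formula, not the site's own.)"""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += 0.5 ** (age_days / half_life_days)
    return score


# One commit today outweighs four commits from a year ago:
today = datetime.now(timezone.utc)
recent = [today - timedelta(days=1)]
old = [today - timedelta(days=365)] * 4
print(activity_score(recent) > activity_score(old))  # True
```

Under such a scheme, a repository with a burst of commits in the last month would outrank one with the same total commit count spread over several years.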
What are some alternatives?
- ez-text2video - Easily run text-to-video diffusion with customized video length, fps, and dimensions on 4GB video cards or on CPU.
- LAMP - Official implementation of LAMP: Learn a Motion Pattern by Few-Shot Tuning a Text-to-Image Diffusion Model (few-shot text-to-video diffusion).
- CogView - Text-to-image generation; the repo for the NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
- Awesome-Video-Diffusion - A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
- Sketch-Guided-Stable-Diffusion - Unofficial implementation of the Google paper - https://sketch-guided-diffusion.github.io/
- Wuerstchen - Official implementation of Würstchen: Efficient Pretraining of Text-to-Image Models.
- aphantasia - CLIP + FFT/DWT/RGB = text to image/video.
- anima - Turn text into video using Stable Diffusion and Google FILM.
- stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
- MultiDiffusion - Official PyTorch implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" (ICML 2023).
- LLM-groundedDiffusion - LLM-grounded Diffusion (LMD): Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models.