KandinskyVideo vs. VideoCrafter

| | KandinskyVideo | VideoCrafter |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 148 | 4,146 |
| Growth | 4.7% | 4.6% |
| Activity | 7.2 | 6.9 |
| Latest commit | about 1 month ago | 12 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
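The exact activity formula is not published here; as a rough illustration only, a recency-weighted commit count with an assumed exponential half-life behaves the way the description above suggests, where recent commits count for more than older ones (the half-life value and the function itself are hypothetical, not the site's real metric):

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=30):
    """Hypothetical recency-weighted commit count: a commit made
    `half_life_days` ago counts half as much as one made today."""
    return sum(
        0.5 ** ((today - d).days / half_life_days)
        for d in commit_dates
    )

today = date(2023, 11, 1)
# Five commits in the last week vs. five commits spread over months ago.
recent = [today - timedelta(days=n) for n in (1, 2, 3, 5, 8)]
stale = [today - timedelta(days=n) for n in (90, 120, 150, 180, 200)]

recent_score = activity_score(recent, today)  # near 5: almost full weight
stale_score = activity_score(stale, today)    # well under 1: old commits barely count
```

With the same number of commits, the recently active project scores far higher, which matches the described behavior of the metric.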
- GitHub: AILab-CVC/VideoCrafter, "VideoCrafter1: Open Diffusion Models for High-Quality Video Generation"
Tools for AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Video Crafter (generate 8-second videos from a text prompt): https://github.com/VideoCrafter/VideoCrafter (Video Crafter on GitHub); model checkpoints: https://huggingface.co/VideoCrafter/t2v-version-1-1/tree/main/models
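Fetching the linked repository and its checkpoints locally might look like the following setup sketch. The checkpoint layout and any inference entry point vary between VideoCrafter releases, so treat everything beyond the two URLs above as an assumption and check the repo README for the actual commands:

```shell
# Sketch: clone VideoCrafter and pull its text-to-video checkpoints.
git clone https://github.com/VideoCrafter/VideoCrafter
cd VideoCrafter
# Assumption: the repo ships a standard requirements file.
pip install -r requirements.txt

# Checkpoints are hosted on Hugging Face; git-lfs can mirror the model repo.
# The checkpoints/ target directory is an arbitrary choice, not a repo convention.
git lfs install
git clone https://huggingface.co/VideoCrafter/t2v-version-1-1 checkpoints/t2v-version-1-1
```

This is an environment-setup fragment, not a runnable pipeline; the actual generation script and its flags should be taken from the repository's documentation.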
- Joe Biden vs. Shakira - VideoCrafter (Video2Video)
- VideoCrafter: a Toolkit for Text-to-Video Generation and Editing
- New 1.2B parameter text to video model is out: Latent Video Diffusion Models for High-Fidelity Long Video Generation
New 1.2B-parameter text-to-video model is out, higher quality than ModelScope.
GitHub: https://github.com/VideoCrafter/VideoCrafter
What are some alternatives?
storyteller - Multimodal AI Story Teller, built with Stable Diffusion, GPT, and neural text-to-speech
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
Text-To-Video-Finetuning - Finetune ModelScope's Text To Video model using Diffusers 🧨
sd-webui-deforum - Deforum extension for AUTOMATIC1111's Stable Diffusion webui
stable-diffusion-webui-normalmap-script - Normal Maps for Stable Diffusion WebUI
sd-webui-text2video - Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies
stable-diffusion - A latent text-to-image diffusion model
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
ebsynth - Fast Example-based Image Synthesis and Style Transfer
Real-ESRGAN-ncnn-vulkan - NCNN implementation of Real-ESRGAN. Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.