Text-To-Video-Finetuning vs sd-webui-text2video

| | Text-To-Video-Finetuning | sd-webui-text2video |
|---|---|---|
| Mentions | 19 | 29 |
| Stars | 507 | 1,259 |
| Growth | - | 2.5% |
| Activity | 10.0 | 9.0 |
| Latest commit | 6 months ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Text-To-Video-Finetuning
- Announcing zeroscope_v2_XL: a new 1024x576 video model based on ModelScope
I used this repo for the finetuning: https://github.com/ExponentialML/Text-To-Video-Finetuning
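For readers who want to try such a Diffusers-format fine-tune locally, here is a minimal sketch of loading and sampling one with the `diffusers` library. The model ID `cerspense/zeroscope_v2_576w` and the generation settings are assumptions for illustration, not taken from the post, and the exact shape of `.frames` varies between diffusers versions.

```python
# Minimal sketch: sample a short clip from a Diffusers-format text-to-video
# checkpoint (e.g. a ModelScope fine-tune such as zeroscope).
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w",   # assumed checkpoint; swap in your own fine-tune
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()       # keeps peak VRAM modest

frames = pipe(
    "a corgi surfing a wave at sunset",
    num_frames=24,
    num_inference_steps=25,
).frames                              # newer diffusers versions may need frames[0]

export_to_video(frames, "clip.mp4")
```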
- Inspired by u/Many-Ad-6225's Mortal Kombat remastering post, test of a Liu Kang animation x4 upscale (ModelScope vid2vid)
- Text-to-Video Model Fine-Tuned with 512x512 Anime-Style for Diffusers
- How do you custom-train ModelScope?
- ModelScope Finetuning
Has anyone successfully done this? I walked through the steps and did not find what I wanted, so I'm wondering if anyone has a tutorial on fine-tuning ModelScope with https://github.com/ExponentialML/Text-To-Video-Finetuning
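Since the thread itself doesn't spell out what such a fine-tune involves, the sketch below shows roughly what a single training step for a ModelScope-style text-to-video model looks like when the weights are in Diffusers format. It is not the Text-To-Video-Finetuning code; the base checkpoint, hyperparameters, and loop structure are illustrative assumptions.

```python
# Illustrative sketch of one fine-tuning step for a Diffusers-format
# text-to-video model (not the actual Text-To-Video-Finetuning code).
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, UNet3DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "damo-vilab/text-to-video-ms-1.7b"   # assumed base checkpoint
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet3DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def train_step(frames, caption):
    """frames: (batch, channels, num_frames, height, width) in [-1, 1]."""
    b, c, f, h, w = frames.shape
    with torch.no_grad():
        # Encode every frame with the image VAE, then restack into a video latent.
        flat = frames.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)
        latents = vae.encode(flat).latent_dist.sample() * vae.config.scaling_factor
        latents = latents.reshape(b, f, 4, h // 8, w // 8).permute(0, 2, 1, 3, 4)
        tokens = tokenizer(caption, padding="max_length", truncation=True,
                           max_length=tokenizer.model_max_length, return_tensors="pt")
        text_emb = text_encoder(tokens.input_ids)[0]

    # Standard diffusion objective: add noise, predict it, take an MSE loss.
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (b,), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=text_emb).sample

    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```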
- What will happen once AI is capable of letting 1 person make a whole Hollywood-quality film?
Well, today all I actually have is ModelScope txt2video, SadTalker and an understanding of how this technology works, but pretty soon I'll have this https://github.com/ExponentialML/Text-To-Video-Finetuning/pull/27 too. Then whatever advancements things like https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/ unlock will filter down to me as well, and so it will go. My understanding of the tech will continue to deepen as I retrain from traditional software engineering to machine learning. Things like cuLitho (https://www.anandtech.com/show/18792/nvidias-culitho-to-speed-up-computational-lithography-for-2nm-and-beyond) and AlphaTensor (https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor) will keep making compute faster and more affordable, driving the cost of training and inference down and massively increasing accessibility. More and more functions will be approximated ever more closely (https://www.youtube.com/watch?v=0QczhVg5HaI).
- Animov-0.1 - High-resolution anime fine-tune of ModelScope text2video is now available in Auto1111! Trained on 384x384 anime fragments by strangeman3107, makes 2-second-long videos with only 8.6 GB of VRAM (16 frames at 8 fps)
Made by strangeman3107 via https://github.com/ExponentialML/Text-To-Video-Finetuning. The original Diffusers weights: https://huggingface.co/datasets/strangeman3107/animov-0.1
As soon as one of the Deforum Discord server's members linked it to me, I was so inspired that I quickly wrote the Diffusers->pth (ModelScope original format) conversion script
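The actual conversion script lives in the extension repo; the sketch below only illustrates the general shape of such a Diffusers->pth conversion: collect the fine-tuned UNet weights, remap their key names to the original ModelScope layout, and save a single .pth file. The `remap_key` function, the paths, and the output filename here are placeholders, not the real mapping or format.

```python
# Structural sketch of a Diffusers -> ModelScope .pth conversion.
# The key remapping below is a placeholder; a real script needs a full
# table of renames matching ModelScope's original parameter names.
import torch
from diffusers import UNet3DConditionModel

def remap_key(diffusers_key: str) -> str:
    # Placeholder: a real mapping translates names such as
    # "down_blocks.0.attentions.0..." into ModelScope's layer names.
    return diffusers_key

unet = UNet3DConditionModel.from_pretrained("path/to/finetuned-model", subfolder="unet")
modelscope_state = {remap_key(k): v for k, v in unet.state_dict().items()}
torch.save(modelscope_state, "text2video_pytorch_model.pth")  # assumed output layout
```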
- Auto1111 text2video Major Update! Animate pictures and loop videos with inpainting keyframes. A 125-frame (8-second) video now takes only 12 GB of VRAM thanks to torch 2.0 optimization. WebAPI is released, no delay between runs! (ModelScope)
Yes, there's a Diffusers-based repo: https://github.com/ExponentialML/Text-To-Video-Finetuning.
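The post doesn't say exactly which "torch 2.0 optimization" is meant, but PyTorch 2.0's built-in `scaled_dot_product_attention` is the kind of change that cuts peak VRAM for long clips, since it avoids materialising the full attention matrix. The sketch below shows the drop-in idea in isolation; the tensor shapes are made up for illustration.

```python
# Sketch: replacing a naive attention implementation with PyTorch 2.0's
# fused scaled_dot_product_attention, which avoids building an explicit
# (tokens x tokens) attention matrix.
import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    # Materialises the attention matrix: memory grows with tokens^2.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return scores.softmax(dim=-1) @ v

def fused_attention(q, k, v):
    # PyTorch >= 2.0: dispatches to a memory-efficient / flash kernel.
    return F.scaled_dot_product_attention(q, k, v)

# Illustrative shapes: (batch, heads, tokens, head_dim).
q = k = v = torch.randn(1, 8, 1024, 64)
out = fused_attention(q, k, v)
```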
- sd-webui-text2video has been updated and now works with xFormers
sd-webui-text2video
- Fat heroes
- SDXL + RealisticVision3 working together
- Testing Zeroscope v2 Text-to-Video using vid2vid
- zeroscope_v2_XL: a new open source 1024x576 video model designed to take on Gen-2
- Fresh Pasta of Bel-Air
Link to Txt2Video extension: https://github.com/kabachuha/sd-webui-text2video
- WELCOME TO OLLIVANDER'S. Overriding my usual bad footage (& voiceover): the head, hands & clothes were created separately in detail in Stable Diffusion using my temporal consistency technique and then merged back together. The background was also AI, animated using a generated depth map.
- Surf's up, poodles! Text to video, ModelScope
Thanks! I'm using a TouchDesigner setup + UI I've built that uses the API in https://github.com/kabachuha/sd-webui-text2video for AUTOMATIC1111.
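For anyone curious what driving that API from outside the webui looks like, here is a generic sketch of POSTing a job to the extension from Python. The endpoint path and payload field names are placeholders, not the extension's documented API; check the sd-webui-text2video README for the real route and parameters.

```python
# Generic sketch of calling an Auto1111-extension web API from Python.
# NOTE: "/t2v/run" and the payload keys are placeholders, not the documented
# sd-webui-text2video API; consult the extension's README for the real route.
import requests

WEBUI_URL = "http://127.0.0.1:7860"   # default Auto1111 address, assumed

payload = {
    "prompt": "a poodle surfing a big wave",
    "frames": 24,
    "fps": 8,
}

resp = requests.post(f"{WEBUI_URL}/t2v/run", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json())  # response shape depends on the extension version
```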
- How to Text 2 video?
- First Open-Source 1024x576 Text To Video Model (potat1) is out!
- "Acid Rain" (ModelScope text2video / Zeroscope 320x) [4K]
What are some alternatives?
sd-webui-modelscope-text2video - Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
lora - Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models (see the sketch after this list).
ebsynth_utility - AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth.
stable-diffusion-webui - Stable Diffusion web UI
VideoCrafter - VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
Pallaidium - Generative AI for the Blender VSE: Text, video or image to video, image and audio in Blender Video Sequence Editor.
sd-webui-dragGAN-extension - Extension of Stable Diffusion webui for DragGAN
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
sd-webui-deforum - Deforum extension for AUTOMATIC1111's Stable Diffusion webui
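The low-rank adaptation idea behind the lora project above boils down to freezing a pretrained weight matrix and learning a small additive update B·A with far fewer trainable parameters. The generic PyTorch sketch below illustrates that idea; it is not the project's own code, and the layer sizes and rank are arbitrary.

```python
# Generic LoRA sketch: freeze the pretrained linear layer and learn a
# low-rank update B @ A on top of it (not the lora project's own code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
# Only A and B are trainable, a small fraction of the base layer's parameters.
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))
```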