sd-webui-modelscope-text2video vs Text-To-Video-Finetuning

| | sd-webui-modelscope-text2video | Text-To-Video-Finetuning |
|---|---|---|
| Mentions | 17 | 19 |
| Stars | 479 | 507 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Last commit | about 1 year ago | 5 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sd-webui-modelscope-text2video
- New 1.2B parameter text-to-video model is out, higher quality than ModelScope
  Working on it: https://github.com/deforum-art/sd-webui-modelscope-text2video/pull/96 (the repo will also be renamed to just sd-webui-text2video after that)
- This is fine
- Can someone help me understand what happens with VRAM?
  They're linked from the main project's README.md under the "Where to get the weights" heading (https://github.com/deforum-art/sd-webui-modelscope-text2video).
- Trump VS Godzilla - ModelScope + Img2Img
- I'm the creator of LoRA. How can I make it better?
- "Melting World" - Text To Video
  Workflow: Text to video AUTO1111 extension https://github.com/deforum-art/sd-webui-modelscope-text2video
- Wake up, samurai! The ModelScope text2video fine-tuning repo just dropped! Based on Diffusers; requirements start from an RTX 3090 at the moment
  Please give it a try and leave your feedback. Soon, fine-tuned models are planned to be usable in the Auto1111 plugin as well: https://github.com/deforum-art/sd-webui-modelscope-text2video/issues/48
- ModelScope text2video is reported to run in 4 GB of VRAM with enough effort; still, help is needed to bring more optimizations and streamline the process (see the Diffusers-side sketch after this list)
  Meanwhile, if you have good training videos, it'd be nice to collect them somewhere for future training, e.g. inside the extension repo's Discussions: https://github.com/deforum-art/sd-webui-modelscope-text2video/discussions
- The kind of result I'm getting with the new A1111 MS text2video model on an RTX 3060 (12 GB)
- "The Rise Of AI" - Text To Video Short Film
  Workflow: Text to video AUTO1111 extension https://github.com/deforum-art/sd-webui-modelscope-text2video
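The 4 GB VRAM figure mentioned above comes from the Auto1111 extension's own optimizations, but the same memory-saving levers are exposed on the Diffusers side. Here is a minimal sketch, assuming the `damo-vilab/text-to-video-ms-1.7b` weights from the Hub and a recent diffusers release (the exact `.frames` return shape varies between versions):

```python
# Minimal sketch: low-VRAM ModelScope text2video inference with Diffusers.
# Model ID and calls follow the Diffusers text-to-video docs; actual VRAM
# usage depends on resolution, frame count, and the diffusers version.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()   # keep only the active sub-module on the GPU
pipe.enable_vae_slicing()         # decode frames in slices instead of all at once
pipe.enable_attention_slicing()   # smaller attention memory peak, slightly slower

frames = pipe("a corgi running on the beach", num_frames=16).frames[0]
export_to_video(frames, "corgi.mp4")
```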
Text-To-Video-Finetuning
- Announcing zeroscope_v2_XL: a new 1024x576 video model based on ModelScope (see the two-stage inference sketch after this list)
  I used this repo for the fine-tuning: https://github.com/ExponentialML/Text-To-Video-Finetuning
- Inspired by u/Many-Ad-6225's Mortal Kombat remastering post, test of a Liu Kang animation x4 upscale (ModelScope vid2vid)
- Text-to-Video Model Fine-Tuned with 512x512 Anime-Style for Diffusers
- How do you custom train Modelscope?
- ModelScope Finetuning
  Has anyone successfully done this? I walked through the steps and did not find what I wanted, so I'm wondering if anyone has a tutorial on fine-tuning ModelScope with https://github.com/ExponentialML/Text-To-Video-Finetuning (a hedged launch sketch follows after this list)
- What will happen once AI is capable of letting one person make a whole Hollywood-quality film?
  Well, today all I actually have is ModelScope txt2video, SadTalker, and an understanding of how this technology works, but pretty soon I'll have https://github.com/ExponentialML/Text-To-Video-Finetuning/pull/27 too. Then whatever advancements things like DINOv2 (https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/) unlock will filter down to me as well, and so on it will go. My understanding of the tech will keep deepening as I retrain from traditional software engineering to machine learning. Things like cuLitho (https://www.anandtech.com/show/18792/nvidias-culitho-to-speed-up-computational-lithography-for-2nm-and-beyond) and AlphaTensor (https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor) will keep making compute faster and more affordable, driving down the cost of training and inference and massively increasing accessibility. More and more functions will be approximated ever more closely (https://www.youtube.com/watch?v=0QczhVg5HaI).
- Animov-0.1, a high-resolution anime fine-tune of ModelScope text2video, is now available in Auto1111! Trained on 384x384 anime fragments by strangeman3107; makes 2-second videos (16 frames at 8 fps) with only 8.6 GB of VRAM
  Made by strangeman3107 via https://github.com/ExponentialML/Text-To-Video-Finetuning. The original Diffusers weights: https://huggingface.co/datasets/strangeman3107/animov-0.1
  As soon as one of the Deforum Discord server's members linked it to me, I was so inspired that I quickly wrote the Diffusers -> .pth (ModelScope's original format) conversion script (a simplified sketch of it follows after this list)
- Auto1111 text2video Major Update! Animate pictures and loop videos with inpainting keyframes. A 125-frame (8-second) video now takes only 12 GB of VRAM thanks to Torch 2 optimizations. The WebAPI is released: no delay between runs! (ModelScope)
  Yes, there's a Diffusers-based repo: https://github.com/ExponentialML/Text-To-Video-Finetuning.
- sd-webui-text2video has been updated and now works with xFormers
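On the zeroscope_v2_XL announcement above: the usual recipe, per the Diffusers text-to-video docs, is two-stage. Generate at low resolution with cerspense/zeroscope_v2_576w, then upscale via vid2vid with the XL model. A minimal sketch; the frame dtype/scale and the `.frames` indexing vary across diffusers versions:

```python
# Sketch of the two-stage zeroscope recipe: base generation at 576x320,
# then vid2vid upscaling to 1024x576 with zeroscope_v2_XL.
import numpy as np
import torch
from PIL import Image
from diffusers import TextToVideoSDPipeline, VideoToVideoSDPipeline
from diffusers.utils import export_to_video

prompt = "a storm trooper surfing a wave"

# Stage 1: base generation with the 576w model.
base = TextToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w", torch_dtype=torch.float16
)
base.enable_model_cpu_offload()
frames = base(prompt, num_frames=24, height=320, width=576).frames[0]

# Stage 2: vid2vid upscale with the XL model.
# Assumes float frames in [0, 1]; older diffusers returned uint8 directly.
xl = VideoToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16
)
xl.enable_model_cpu_offload()
video = [Image.fromarray((f * 255).astype(np.uint8)).resize((1024, 576)) for f in frames]
frames_hd = xl(prompt, video=video, strength=0.6).frames[0]
export_to_video(frames_hd, "trooper_1024x576.mp4")
```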
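On the ModelScope fine-tuning question: ExponentialML/Text-To-Video-Finetuning is driven by a YAML config passed to its train.py. A hedged launch sketch follows; the config keys here are assumptions modeled loosely on the repo's example configs, so treat the repo's configs/ directory as the authoritative schema:

```python
# Hedged sketch: launching ExponentialML/Text-To-Video-Finetuning.
# The `train.py --config` entry point matches the repo's README; the config
# keys below are assumptions based on its example configs, not a full schema.
import subprocess
import yaml

config = {
    "pretrained_model_path": "damo-vilab/text-to-video-ms-1.7b",  # base weights, Diffusers layout
    "output_dir": "./outputs/my_finetune",
    "train_data": {                       # assumed keys; see the repo's configs/
        "path": "./my_training_videos",
        "n_sample_frames": 16,
    },
}
with open("my_config.yaml", "w") as f:
    yaml.safe_dump(config, f)

# Equivalent to: python train.py --config my_config.yaml (run from the repo root)
subprocess.run(["python", "train.py", "--config", "my_config.yaml"], check=True)
```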
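Finally, on the Animov post's Diffusers -> .pth conversion script: the skeleton is just loading the fine-tuned UNet in Diffusers format and dumping its state dict to a single .pth file. This is a heavily simplified sketch; the real script also remaps parameter names from the Diffusers layout to ModelScope's, and the local path and output filename below are assumptions:

```python
# Simplified sketch of a Diffusers -> .pth export for a ModelScope fine-tune.
# NOTE: the actual conversion also renames parameters from the Diffusers
# UNet3DConditionModel layout to ModelScope's original layout; this sketch
# only shows the load/save skeleton.
import torch
from diffusers import UNet3DConditionModel

unet = UNet3DConditionModel.from_pretrained(
    "./animov-0.1",          # local Diffusers-format checkpoint (assumed path)
    subfolder="unet",
)
state_dict = unet.state_dict()
# ... Diffusers -> ModelScope key remapping would happen here ...
torch.save(state_dict, "text2video_pytorch_model.pth")  # assumed output name
```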
What are some alternatives?
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
sd-webui-additional-networks
stable-diffusion-webui - Stable Diffusion web UI
VideoCrafter - VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
sd-webui-text2video - Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies
Pallaidium - Generative AI for the Blender VSE: Text, video or image to video, image and audio in Blender Video Sequence Editor.
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
modelscope - ModelScope: bring the notion of Model-as-a-Service to life.
kohya_ss