ainodes-engine vs Text-To-Video-Finetuning
| | ainodes-engine | Text-To-Video-Finetuning |
|---|---|---|
| Mentions | 24 | 19 |
| Stars | 250 | 507 |
| Growth | - | - |
| Activity | 9.3 | 10.0 |
| Latest commit | about 1 month ago | 5 months ago |
| Language | Python | Python |
| License | GNU Lesser General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ainodes-engine
- We need 3d interface for stable diffusion
Currently working on a PyQt node-based app; Qt has a lot of 3D power. XmYx/ainodes-engine (github.com)
- aiNodes - daily (comfy) update
1: git clone -b comfy-dev https://github.com/XmYx/ainodes-engine
2: cd ainodes-engine
3: setup_ainodes.bat
Then you can start aiNodes the usual way, but also ComfyUI with run_comfyui.bat. If you install any custom node package for Comfy using the ComfyUI Manager, it gets imported into the desktop engine on the next start. Compatibility is a work in progress, but most image and sampling nodes will work (Disco Diffusion too). To install the ComfyUI Manager, run the following from the ainodes-engine folder:
1: cd src
2: cd ComfyUI
3: cd custom_nodes
4: git clone https://github.com/ltdrdata/ComfyUI-Manager.git
- aiNodes
XmYx/ainodes-engine (github.com) patreon.com/deforum_ainodes
- AUTOMATIC1111 New Extension - Kandinsky
- aiNodes - daily update - Example Graphs with Help
- I created another stable diffusion UI.
And take a look at the node engine if you are interested: https://github.com/XmYx/ainodes-engine
- aiNodes - work continues
- I will pay someone to make me a simple UI for SD
You can see some of our previous open source works at: XmYx/ainodes-engine (github.com)
Text-To-Video-Finetuning
- Announcing zeroscope_v2_XL: a new 1024x576 video model based on Modelscope
I used this repo for the finetuning: https://github.com/ExponentialML/Text-To-Video-Finetuning
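For anyone who wants to try one of these ModelScope-derived checkpoints, inference goes through the Hugging Face diffusers text-to-video pipeline. Below is a minimal sketch, assuming the base damo-vilab/text-to-video-ms-1.7b weights and an illustrative prompt (neither is named in the post); a zeroscope or other fine-tuned repo ID can be substituted.

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

# Assumed model ID: the base ModelScope text-to-video weights on the Hub.
# Swap in a fine-tuned checkpoint (e.g. a zeroscope repo) to try the models above.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # trade some speed for a much lower VRAM peak

video_frames = pipe("a corgi surfing a wave", num_inference_steps=25).frames
# On newer diffusers releases .frames is batched per prompt; use .frames[0] there.
export_to_video(video_frames, "modelscope_test.mp4")
```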
- Inspired by u/Many-Ad-6225's Mortal Kombat remastering post, test of a Liu Kang animation x4 upscale (ModelScope vid2vid)
- Text-to-Video Model Fine-Tuned with 512x512 Anime-Style for Diffusers
- How do you custom train Modelscope?
- ModelScope Finetuning
Has anyone successfully done this? I walked through the steps and did not find what I wanted, so I'm wondering whether anyone has a tutorial on fine-tuning ModelScope with https://github.com/ExponentialML/Text-To-Video-Finetuning
- What will happen once AI is capable of letting 1 person make a whole Hollywood-quality film?
Well, today all I actually have is ModelScope txt2video, SadTalker and an understanding of how this technology works, but pretty soon I'll have this https://github.com/ExponentialML/Text-To-Video-Finetuning/pull/27 too. Then whatever advancements things like https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/ unlock will filter down to me as well, and so it will go. My understanding of the tech will keep deepening as I continue retraining from traditional software engineering to machine learning. Things like cuLitho (https://www.anandtech.com/show/18792/nvidias-culitho-to-speed-up-computational-lithography-for-2nm-and-beyond) and AlphaTensor (https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor) will keep making compute faster and more affordable, driving the cost of training and inference down and massively increasing accessibility. More and more functions will continue to be approximated ever more closely (https://www.youtube.com/watch?v=0QczhVg5HaI).
- Animov-0.1 - High-resolution anime fine-tune of ModelScope text2video is now available in Auto1111! Trained on 384x384 anime fragments by strangeman3107; makes 2-second videos with only 8.6 GB of VRAM (16 frames at 8 fps)
Made by strangeman3107 via https://github.com/ExponentialML/Text-To-Video-Finetuning. The original Diffusers weights: https://huggingface.co/datasets/strangeman3107/animov-0.1
As soon as one of the Deforum Discord server members linked it to me, I was so inspired that I quickly wrote the Diffusers->pth (original ModelScope format) conversion script.
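Since the fine-tune above ships as Diffusers weights, it can be loaded directly as a text-to-video pipeline even before any conversion to the original ModelScope .pth layout. A hedged sketch, assuming the standard TextToVideoSDPipeline folder layout; the local path and prompt are placeholders, not taken from the post.

```python
import torch
from diffusers import TextToVideoSDPipeline
from diffusers.utils import export_to_video

# Placeholder path: a local copy of the Diffusers-format weights, e.g. the
# animov-0.1 download or the output folder of a Text-To-Video-Finetuning run,
# assuming the usual layout (unet/, vae/, text_encoder/, scheduler/, ...).
pipe = TextToVideoSDPipeline.from_pretrained(
    "path/to/diffusers-weights", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # keeps peak VRAM low on consumer GPUs

# 16 frames matches the 2-second, 8 fps clips described in the post.
frames = pipe("an anime character walking through a field", num_frames=16).frames
export_to_video(frames, "animov_test.mp4")
```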
- Auto1111 text2video Major Update! Animate pictures and loop videos with inpainting keyframes. A 125-frame (8 sec) video now takes only 12 GB of VRAM thanks to torch2 optimization. WebAPI is released, no delay between runs! (ModelScope)
Yes, there's a Diffusers-based repo: https://github.com/ExponentialML/Text-To-Video-Finetuning.
- sd-webui-text2video has been updated and now it works with Xformers
What are some alternatives?
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
sd-webui-additional-networks
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
kandinsky2-simplegui - Simple local gui to play with Kandinsky 2
stable-diffusion-webui - Stable Diffusion web UI
diffuzers - a web ui & api for 🤗 diffusers
VideoCrafter - VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
kandinsky-for-automatic1111 - Automatic1111 extension adding support for Kandinsky 2.X
Pallaidium - Generative AI for the Blender VSE: Text, video or image to video, image and audio in Blender Video Sequence Editor.
stable-diffusion-webui-extensions - Extension index for stable-diffusion-webui