Text-To-Video-Finetuning vs lora

| | Text-To-Video-Finetuning | lora |
|---|---|---|
| Mentions | 19 | 83 |
| Stars | 507 | 6,690 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Last commit | 6 months ago | 2 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Text-To-Video-Finetuning
- Announcing zeroscope_v2_XL: a new 1024x576 video model based on Modelscope
I used this repo for the finetuning: https://github.com/ExponentialML/Text-To-Video-Finetuning
- Inspired by u/Many-Ad-6225's Mortal Kombat remastering post, test of a Liu Kang animation x4 upscale (ModelScope vid2vid)
- Text-to-Video Model Fine-Tuned with 512x512 Anime-Style for Diffusers
- How do you custom train Modelscope?
- ModelScope Finetuning
Has anyone successfully done this? I walked through the steps and did not find what I wanted, so I want to know if anyone has a tutorial on fine-tuning ModelScope with https://github.com/ExponentialML/Text-To-Video-Finetuning
- What will happen once AI is capable of letting 1 person make a whole Hollywood-quality film?
Well, actually today all I have is ModelScope txt2video, SadTalker and an understanding of how this technology works, but pretty soon I'll have this https://github.com/ExponentialML/Text-To-Video-Finetuning/pull/27 too. Then whatever advancements things like https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/ unlock will filter down to me too, and so on it will go. My understanding of the tech will continue to deepen as I retrain from traditional software engineering to machine learning. Things like cuLitho (https://www.anandtech.com/show/18792/nvidias-culitho-to-speed-up-computational-lithography-for-2nm-and-beyond) and AlphaTensor (https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor) will continue to make compute faster and more affordable, driving the cost of training and inference down and massively increasing accessibility. More and more functions will be approximated ever more closely (https://www.youtube.com/watch?v=0QczhVg5HaI).
- Animov-0.1 — High-resolution anime fine-tune of ModelScope text2video is now available in Auto1111! Trained on 384x384 anime fragments by strangeman3107, makes 2-second videos with only 8.6 GB of VRAM (16 frames at 8 fps)
Made by strangeman3107 via https://github.com/ExponentialML/Text-To-Video-Finetuning. The original Diffusers weights: https://huggingface.co/datasets/strangeman3107/animov-0.1
As soon as one of the Deforum Discord server members linked it to me, I was so inspired that I quickly wrote the Diffusers->pth (ModelScope original format) conversion script.
- Auto1111 text2video Major Update! Animate pictures and loop videos with inpainting keyframes. A 125-frame (8 sec) video now takes only 12 GB of VRAM thanks to torch2 optimization. WebAPI is released, no delay between runs! (ModelScope)
Yes, there's a Diffusers-based repo: https://github.com/ExponentialML/Text-To-Video-Finetuning.
- sd-webui-text2video has been updated and now it works with Xformers
lora
- You can now train a 70B language model at home
The diffusion UNet has an "extended" LoRA version nowadays that applies to the ResNet blocks as well as the cross-attention: https://github.com/cloneofsimo/lora (a rough sketch of the idea follows below).
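To make that concrete, here is a minimal, hedged sketch of how a low-rank LoRA-style update might be bolted onto a convolution layer (the part that makes the "extended" variant cover ResNet blocks, not just attention projections). This is an illustration of the general idea only, not the actual implementation in cloneofsimo/lora; the class name `LoRAConv2d` and its parameters are made up for this example.

```python
# Hedged sketch: one way to add a low-rank (LoRA-style) side branch to a Conv2d.
# Not the exact code from cloneofsimo/lora.
import torch
import torch.nn as nn

class LoRAConv2d(nn.Module):
    """Frozen pretrained Conv2d plus a trainable low-rank update (illustrative only)."""
    def __init__(self, base_conv: nn.Conv2d, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base_conv
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze the pretrained conv
        # Low-rank path: same spatial kernel down to `rank` channels, then a 1x1 conv back up.
        self.down = nn.Conv2d(base_conv.in_channels, rank,
                              kernel_size=base_conv.kernel_size,
                              stride=base_conv.stride,
                              padding=base_conv.padding,
                              dilation=base_conv.dilation,
                              bias=False)
        self.up = nn.Conv2d(rank, base_conv.out_channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)                   # zero init: adapted model starts identical to the base
        self.scale = scale

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```

Only the small `down`/`up` convolutions are trained, so the approach keeps the memory footprint close to LoRA on linear layers while still letting the ResNet parts of the UNet adapt.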
- How it feels right now
Absolutely. But that doesn't matter, because you only have to train it at scale once. There are already papers showing that it's possible to update weights in small sections. You won't have to wait for the next monolithic LLM to drop to get up-to-date information; it will start to learn in bits and pieces.
- LoRA tuning in Julia
No, it's a deep learning thing
- What does Lora mean?
Low Rank Adaptation of Large Language Models.
- [D] An ELI5 explanation for LoRA - Low-Rank Adaptation.
Recently, I have seen the LoRA technique (Low-Rank Adaptation of Large Language Models) as a popular method for fine-tuning LLMs and other models.
- Combining LoRA, Retro, and Large Language Models for Efficient Knowledge Retrieval and Retention
Enter LoRA, a method proposed for adapting pre-trained models to specific tasks[2]. By freezing pre-trained model weights and injecting trainable rank decomposition matrices into the transformer architecture, LoRA can reduce the number of trainable parameters and the GPU memory requirement, making the adaptation of LLMs for downstream tasks more feasible.
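As a rough illustration of the mechanism described above, here is a minimal PyTorch-style sketch of a LoRA-wrapped linear layer: the pretrained weight is frozen and a small trainable rank decomposition is added on top. The class `LoRALinear` and its `rank`/`alpha` arguments are hypothetical names for this sketch, not the API of any particular LoRA library.

```python
# Minimal LoRA sketch (illustrative, not a specific library's API):
# freeze the pretrained weight and learn a low-rank update B @ A on top of it.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained Linear plus a trainable rank decomposition (hypothetical sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # pretrained weights stay frozen
        # Trainable low-rank factors: effective weight is W + (alpha / rank) * B @ A
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init -> no change at step 0
        self.scale = alpha / rank

    def forward(self, x):
        # Only A and B receive gradients, so trainable parameters and optimizer
        # state (and hence GPU memory) shrink sharply compared to full fine-tuning.
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```

Zero-initializing one factor means the adapted model starts out identical to the pretrained one, and the low-rank update can later be merged back into the base weight, so inference adds no extra latency.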
- 100K Context Windows
Open-source LLM projects have largely solved this using Low-Rank Adaptation of Large Language Models (LoRA): https://arxiv.org/abs/2106.09685
Apparently an RTX 4090 running overnight is sufficient to produce a fine-tuned model that can spit out new Harry Potter stories, or whatever...
- President Biden meets with AI CEOs at the White House amid ethical criticism
Alpaca was trained for $600 ($100 for the smaller model) and offers outputs competitive with ChatGPT. https://arxiv.org/abs/2106.09685
- LoRA: Low-Rank Adaptation of Large Language Models
- LORA: Low-Rank Adaptation of Large Language Models
What are some alternatives?
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
stable-diffusion-webui - Stable Diffusion web UI
LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
VideoCrafter - VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
sd_dreambooth_extension
Pallaidium - Generative AI for the Blender VSE: Text, video or image to video, image and audio in Blender Video Sequence Editor.
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
ControlNet - Let us control diffusion models!
kohya_ss
sd-webui-additional-networks