Text-To-Video-Finetuning vs alpaca-lora

| | Text-To-Video-Finetuning | alpaca-lora |
|---|---|---|
| Mentions | 19 | 107 |
| Stars | 507 | 18,280 |
| Growth | - | - |
| Activity | 10.0 | 3.6 |
| Latest commit | 6 months ago | 3 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Text-To-Video-Finetuning
- Announcing zeroscope_v2_XL: a new 1024x576 video model based on ModelScope
  I used this repo for the finetuning: https://github.com/ExponentialML/Text-To-Video-Finetuning
- Inspired by u/Many-Ad-6225's Mortal Kombat remastering post, test of a Liu Kang animation x4 upscale (ModelScope vid2vid)
- Text-to-Video Model Fine-Tuned with 512x512 Anime-Style for Diffusers
- How do you custom train ModelScope?
- ModelScope Finetuning
  Has anyone successfully done this? I walked through the steps and did not find what I wanted, so I want to know if anyone has a tutorial about fine-tuning ModelScope with https://github.com/ExponentialML/Text-To-Video-Finetuning
- What will happen once AI is capable of letting 1 person make a whole Hollywood-quality film?
  Well, today all I have is ModelScope txt2video, SadTalker and an understanding of how this technology works, but pretty soon I'll have this https://github.com/ExponentialML/Text-To-Video-Finetuning/pull/27 too. Then whatever advancements things like https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/ unlock will filter down to me as well, and so it will go. My understanding of the tech will continue to deepen as I retrain from traditional software engineering to machine learning. Things like cuLitho (https://www.anandtech.com/show/18792/nvidias-culitho-to-speed-up-computational-lithography-for-2nm-and-beyond) and AlphaTensor (https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor) will keep making compute faster and more affordable, driving the cost of training and inference down and massively increasing accessibility. More and more functions will keep being approximated ever more closely (https://www.youtube.com/watch?v=0QczhVg5HaI).
- Animov-0.1 — High-resolution anime fine-tune of ModelScope text2video is now available in Auto1111! Trained on 384x384 anime fragments by strangeman3107; makes 2-second-long videos (16 frames at 8 fps) with only 8.6 GB of VRAM
  Made by strangeman3107 via https://github.com/ExponentialML/Text-To-Video-Finetuning. The original Diffusers weights: https://huggingface.co/datasets/strangeman3107/animov-0.1
  As soon as one of the Deforum Discord server's members linked it to me, I was so inspired that I quickly wrote the Diffusers->pth (ModelScope original format) conversion script.
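  For anyone who wants to try the Diffusers-format weights directly, here is a minimal sketch, not the author's exact workflow: it assumes the upload is a standard text-to-video Diffusers pipeline, and the local path and prompt are placeholders (the exact `.frames` layout also varies by diffusers version).

  ```python
  # Load Diffusers-format ModelScope fine-tune weights and render a clip.
  import torch
  from diffusers import DiffusionPipeline
  from diffusers.utils import export_to_video

  # Local directory holding the Diffusers weights (placeholder path).
  pipe = DiffusionPipeline.from_pretrained("./animov-0.1",
                                           torch_dtype=torch.float16)
  pipe.to("cuda")

  # 16 frames at 8 fps ~= the 2-second clips described above.
  frames = pipe("an anime character walking on a beach",
                num_frames=16).frames
  export_to_video(frames, "animov_sample.mp4")
  ```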
- Auto1111 text2video major update! Animate pictures and loop videos with inpainting keyframes. A 125-frame (8-second) video now takes only 12 GB of VRAM thanks to torch 2 optimization. The web API is released, with no delay between runs! (ModelScope)
  Yes, there's a Diffusers-based repo: https://github.com/ExponentialML/Text-To-Video-Finetuning.
- sd-webui-text2video has been updated and now works with xFormers
alpaca-lora
- How to deal with loss for SFT for CausalLM
  Here is an example: https://github.com/tloen/alpaca-lora/blob/main/finetune.py
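  To see the idea in isolation: the usual trick (and what `train_on_inputs`-style options toggle) is to copy `input_ids` into `labels` and overwrite the prompt positions with -100, which PyTorch's cross-entropy ignores. A minimal sketch, with an illustrative tokenizer rather than anything taken from the linked script:

  ```python
  # Labels copy input_ids, but prompt positions become -100, which
  # CrossEntropyLoss ignores, so only response tokens contribute to loss.
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

  def build_example(prompt: str, response: str, train_on_inputs: bool = False):
      prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
      ids = tokenizer(prompt + response, add_special_tokens=False)["input_ids"]
      labels = list(ids)
      if not train_on_inputs:
          labels[:len(prompt_ids)] = [-100] * len(prompt_ids)
      return {"input_ids": ids, "labels": labels}
  ```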
- How to Finetune Llama 2: A Beginner's Guide
  In this blog post, I want to make it as simple as possible to fine-tune the Llama 2 7B model, using as little code as possible. We will use the Alpaca LoRA training script, which automates the process of fine-tuning the model, and we will use Beam for GPU compute.
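  A minimal sketch of what such a training script boils down to under the hood; the model id, hyperparameters, and the toy one-example dataset are illustrative (the real script adds quantized loading, prompt templating, and resume logic):

  ```python
  # A minimal LoRA fine-tune sketch (fp32 for simplicity; the real script
  # loads the base model quantized and calls a prepare_* helper first).
  from datasets import Dataset
  from peft import LoraConfig, get_peft_model
  from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                            TrainingArguments)

  base = "meta-llama/Llama-2-7b-hf"  # gated repo; assumes you have access
  tokenizer = AutoTokenizer.from_pretrained(base)
  model = AutoModelForCausalLM.from_pretrained(base)

  # Attach LoRA adapters to the attention projections; only these train.
  model = get_peft_model(model, LoraConfig(
      r=8, lora_alpha=16, lora_dropout=0.05,
      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))
  model.print_trainable_parameters()  # a tiny fraction of the 7B weights

  # Toy one-example dataset in the Alpaca prompt style, just to show shape.
  def tokenize(prompt, response):
      ids = tokenizer(prompt + response)["input_ids"] + [tokenizer.eos_token_id]
      return {"input_ids": ids, "labels": list(ids)}

  train_dataset = Dataset.from_list([tokenize(
      "### Instruction:\nName a primary color.\n\n### Response:\n", "Red.")])

  Trainer(
      model=model,
      train_dataset=train_dataset,
      args=TrainingArguments(output_dir="lora-out", num_train_epochs=3,
                             per_device_train_batch_size=1,
                             learning_rate=3e-4, logging_steps=1),
  ).train()
  ```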
- Fine-tuning LLMs with LoRA: A Gentle Introduction
  Implement the code from the Llama LoRA repo in a script we can run locally.
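  For the inference side, a local sketch in the spirit of the repo's generate.py; the base-model and adapter ids below are the commonly used public ones, so treat them as assumptions:

  ```python
  # Attach the published Alpaca LoRA adapter to base LLaMA weights
  # and generate locally.
  import torch
  from peft import PeftModel
  from transformers import AutoModelForCausalLM, AutoTokenizer

  base = "huggyllama/llama-7b"
  tokenizer = AutoTokenizer.from_pretrained(base)
  model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16,
                                               device_map="auto")
  model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")
  model.eval()

  prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  with torch.no_grad():
      out = model.generate(**inputs, max_new_tokens=64)
  print(tokenizer.decode(out[0], skip_special_tokens=True))
  ```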
- Newbie here - trying to install Alpaca LoRA and hitting an error
  Hi all - relatively new to GitHub / programming in general, and I wanted to try to set up Alpaca LoRA locally, following the guide here: https://github.com/tloen/alpaca-lora
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
  Following up on u/tloen's popular alpaca-lora work, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
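  For the bitsandbytes path specifically, a minimal sketch of the 4-bit (QLoRA-style) training setup; the model id is illustrative, and the GPTQ path loads the model differently and is not shown:

  ```python
  # Load the base model in 4-bit with bitsandbytes, then make it
  # trainable with LoRA adapters on top.
  import torch
  from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
  from transformers import AutoModelForCausalLM, BitsAndBytesConfig

  model = AutoModelForCausalLM.from_pretrained(
      "huggyllama/llama-7b",
      quantization_config=BitsAndBytesConfig(
          load_in_4bit=True,
          bnb_4bit_quant_type="nf4",
          bnb_4bit_compute_dtype=torch.float16),
      device_map="auto")

  model = prepare_model_for_kbit_training(model)  # norms to fp32, grads on
  model = get_peft_model(model, LoraConfig(
      r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
      task_type="CAUSAL_LM"))
  model.print_trainable_parameters()
  ```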
- FLaNK Stack Weekly for 20 June 2023
- Converting to GGML?
  If instead you want to apply a LoRA to a PyTorch model, a lot of people use this script to apply the LoRA to the 16-bit model and then quantize it with a GPTQ program afterwards: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
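  Roughly, what that export script does is fold the low-rank deltas back into the fp16 base weights and save a plain HuggingFace checkpoint that a GPTQ/GGML converter can consume. A hedged sketch using PEFT's merge helper, with placeholder paths:

  ```python
  # Merge LoRA deltas into the fp16 base weights and save a plain HF
  # checkpoint for downstream quantization.
  import torch
  from peft import PeftModel
  from transformers import AutoModelForCausalLM, AutoTokenizer

  base = "huggyllama/llama-7b"
  model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
  model = PeftModel.from_pretrained(model, "./lora-out")  # your adapter dir
  model = model.merge_and_unload()  # bakes W + BA into the base layers

  model.save_pretrained("./merged-hf-checkpoint")
  AutoTokenizer.from_pretrained(base).save_pretrained("./merged-hf-checkpoint")
  ```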
- Simple LLM Watermarking - Open Llama 3b LoRA
  There are a few papers on watermarking LLM output, but from what I have seen they all use complex methods of detection so that the watermark goes unseen by the end user and is detected only by algorithm. I believe that a more overt system of watermarking might also be beneficial. One simple method I have tried is character substitution. For this model, I LoRA-finetuned openlm-research/open_llama_3b on the alpaca_data_cleaned_archive.json dataset from https://github.com/tloen/alpaca-lora/, modified by replacing all instances of the "." character in the outputs with "ι" (U+1FBE). The results are pretty good, with the correct substitutions being generated by the model in most cases. It doesn't always work, but this was only a LoRA training run of two epochs of 400 steps each, and 100% substitution isn't really required.
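  The dataset edit itself is a one-liner per example; a sketch assuming the standard Alpaca JSON layout (instruction/input/output fields), with the watermark character recovered from the mojibake-damaged original post:

  ```python
  # Replace every "." in each training example's output with the
  # watermark character before fine-tuning.
  import json

  WATERMARK = "\u1FBE"  # the substitute character from the post

  with open("alpaca_data_cleaned_archive.json") as f:
      data = json.load(f)

  for example in data:
      example["output"] = example["output"].replace(".", WATERMARK)

  with open("alpaca_data_watermarked.json", "w") as f:
      json.dump(data, f, ensure_ascii=False, indent=2)
  ```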
- text-generation-webui's "Train Only After" option
  I am kind of new to fine-tuning LLMs and am not able to understand what this option exactly refers to. I guess it has the same meaning as the "train_on_inputs" parameter of alpaca-lora, though.
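  As far as I can tell they are the same idea: pick a marker string and mask everything before it with -100 so the loss only covers the completion. A rough sketch; the marker and tokenizer are illustrative, and the prefix length is approximate where the marker straddles a token boundary:

  ```python
  # Mask every token up to and including the marker with -100 so the
  # loss covers only what the model should learn to produce.
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
  MARKER = "### Response:"

  def mask_before_marker(text: str):
      cut = text.index(MARKER) + len(MARKER)
      prefix_len = len(tokenizer(text[:cut],
                                 add_special_tokens=False)["input_ids"])
      ids = tokenizer(text, add_special_tokens=False)["input_ids"]
      return {"input_ids": ids,
              "labels": [-100] * prefix_len + ids[prefix_len:]}
  ```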
- Learning sources on working with local LLMs
  Read the paper and also: https://github.com/tloen/alpaca-lora
What are some alternatives?
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
stable-diffusion-webui - Stable Diffusion web UI
llama.cpp - LLM inference in C/C++
VideoCrafter - VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
gpt4all - gpt4all: run open-source LLMs anywhere
Pallaidium - Generative AI for the Blender VSE: Text, video or image to video, image and audio in Blender Video Sequence Editor.
llama - Inference code for Llama models
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
ggml - Tensor library for machine learning