VideoCrafter
|  | VideoCrafter | sd_dreambooth_extension |
| --- | --- | --- |
| Mentions | 6 | 115 |
| Stars | 4,146 | 1,834 |
| Growth | 4.6% | - |
| Last commit | 7 days ago | about 2 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |

Activity: 6.9 (VideoCrafter) vs 8.7 (sd_dreambooth_extension).
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
VideoCrafter
- GitHub - AILab-CVC/VideoCrafter: VideoCrafter1: Open Diffusion Models for High-Quality Video Generation
- Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Video Crafter (generate 8-second videos from a text prompt): https://github.com/VideoCrafter/VideoCrafter (Video Crafter - GitHub); https://huggingface.co/VideoCrafter/t2v-version-1-1/tree/main/models (Video Crafter model checkpoints)
- Joe Biden vs. Shakira - VideoCrafter (Video2Video)
- VideoCrafter: a Toolkit for Text-to-Video Generation and Editing
- New 1.2B parameter text to video model is out: Latent Video Diffusion Models for High-Fidelity Long Video Generation
New 1.2B parameter text to video model is out, higher quality than ModelScope. GitHub: https://github.com/VideoCrafter/VideoCrafter
sd_dreambooth_extension
- SDXL Training for Auto1111 is now Working on a 24GB Card
- (Requesting Help)
I am trying to use Stable Diffusion via AUTOMATIC1111 with the Dreambooth extension.
- It will be absolute madness when SDXL becomes the standard model and we start getting other models derived from it.
When I first attempted SD training, I was very frustrated. It wasn't until I found this obscure forum thread on Github that I actually started producing great results with Dreambooth. Because I have such satisfactory results, I'm very reluctant to beat my brains against LoRA and its related training techniques. I gave up trying to train TI embeddings a long time ago, and I never figured out how to train or use hypernetworks. I've only been able to get good results with Dreambooth, directly because of that thread I linked above. I make LoRAs by extracting them from Dreambooth-trained checkpoints, and I have no idea whether I'm doing the extractions the right way or not.
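The extraction step mentioned above (pulling a LoRA out of a Dreambooth-trained checkpoint) is usually done by taking the difference between the fine-tuned and base weights of each layer and approximating that difference with a truncated SVD. A minimal NumPy sketch of the idea; the `extract_lora` helper and shapes here are illustrative assumptions, not the extension's actual extraction code:

```python
import numpy as np

def extract_lora(base_w, tuned_w, rank=8):
    """Approximate the fine-tune delta (tuned - base) with a low-rank
    product down @ up, as LoRA-extraction tools do via truncated SVD."""
    delta = tuned_w - base_w
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    down = u[:, :rank] * s[:rank]  # (out_features, rank), singular values folded in
    up = vt[:rank, :]              # (rank, in_features)
    return down, up

# Toy demo: a random "base" weight plus a genuinely rank-2 update.
rng = np.random.default_rng(0)
base = rng.standard_normal((64, 64))
true_delta = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 64))
down, up = extract_lora(base, base + true_delta, rank=2)

# A rank-2 factorization recovers a rank-2 delta up to float round-off.
err = np.abs(down @ up - true_delta).max()
```

If the true fine-tune delta has higher rank than the chosen `rank`, the truncated SVD gives the best approximation in that rank, which is why extracted LoRAs can behave slightly differently from the full checkpoint.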
- "Exception training model: 'Some tensors share memory'" with Dreambooth on Vladmandic
Getting the same with AUTOMATIC1111 and the sd_dreambooth extension. Check out more in the issues log: https://github.com/d8ahazard/sd_dreambooth_extension/issues/1266
- Yo, DreamBooth gatekeepers, SHARE YOUR HYPERPARAMETERS, please.
It's several months old and many things have changed, but the spreadsheet available through this thread on Github has been indispensable for me when I train Dreambooth models. I'm astounded no one talks about it; I bring it up all the time. The research presented there should be continued. I'd love to see similar research done for SD v2.1.
- What is the BEST solution for hyper-realistic person training?
Training rate is paramount. Read this Github thread.
- How do you train your LoRAs: 1 epoch or >1 epoch (same # of steps)?
https://github.com/d8ahazard/sd_dreambooth_extension/discussions/547/ (in-depth discussion of training principles)
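The epoch-vs-steps question above is mostly bookkeeping: with a fixed step budget, epochs and per-image repeats trade off against each other. A tiny sketch of that arithmetic, assuming the common images × repeats × epochs ÷ batch-size convention (the `total_steps` helper is hypothetical, not part of the extension):

```python
def total_steps(num_images, repeats, epochs, batch_size=1):
    """One epoch = every image seen `repeats` times, grouped into batches."""
    return (num_images * repeats * epochs) // batch_size

# 20 images at 100 repeats for 1 epoch ...
one_epoch = total_steps(20, 100, 1)    # 2000 steps
# ... is the same total budget as 25 repeats for 4 epochs.
four_epochs = total_steps(20, 25, 4)   # 2000 steps
```

So "1 epoch vs >1 epoch at the same step count" is really a question about checkpoint/snapshot granularity and shuffling, not about how many times each image is seen overall.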
- Struggling to install Dreambooth
sd_dreambooth_extension https://github.com/d8ahazard/sd_dreambooth_extension.git main 926ae204 Fri Mar 31 15:12:45 2023 unknown
- Attempting to train a LoRA with an RTX 2060 (6 GB VRAM), how to go about this?
- SD just released an open source version of their GUI called StableStudio
Also, the Dreambooth extension supports an API (https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/scripts/api.py), so I'm not sure where you're getting that news :/
What are some alternatives?
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
Text-To-Video-Finetuning - Finetune ModelScope's Text To Video model using Diffusers 🧨
kohya_ss
sd-webui-deforum - Deforum extension for AUTOMATIC1111's Stable Diffusion webui
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
stable-diffusion-webui-normalmap-script - Normal Maps for Stable Diffusion WebUI
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
sd-webui-text2video - Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies
dreambooth-training-guide
stable-diffusion - A latent text-to-image diffusion model
sd-scripts