brev-cli
| | brev-cli | sd_dreambooth_extension |
|---|---|---|
| Mentions | 7 | 115 |
| Stars | 198 | 1,834 |
| Growth | 1.5% | - |
| Activity | 7.9 | 8.7 |
| Latest commit | 5 days ago | about 2 months ago |
| Language | Go | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
brev-cli
- Brev: Start fine-tuning and training models in < 10 minutes
- OpenLLaMA: An Open Reproduction of LLaMA
- Using the cloud or buying a GPU
I don't have a PC right now that will run StableDiffusion. I could build one, but I think I'm going to need a pretty powerful GPU, which I'm not sure I can afford right now. I started using something called Brev (https://brev.dev/) (no, I don't work there; I just found it while searching). It's pretty affordable and super easy to set up.
- Is there a good guide on how to train an AI to simulate your own artwork?
I just finished listening to an episode of the Practical AI podcast, where they talked with Nader Khalil from brev.dev. They talked a little bit about setting up Dreambooth and training it with ten images in about 4 minutes. I haven't tested it, but it is worth a try. Brev.dev is a way to set up virtual machines and development environments. I would love to hear from people who have used it.
- New AI edits images based on text instructions (instructPix2Pix/imaginAIry)
- Tensorbook
R.I.P. battery.
Personally I've been using Brev [1] to do my cloud training: you get a cloud GPU instance that you can upgrade/downgrade on the fly, and it supports VS Code out of the box.
[1] https://brev.dev/
- Brev
sd_dreambooth_extension
- SDXL Training for Auto1111 is now Working on a 24GB Card
- (Requesting Help)
I am trying to use StableDiffusion via AUTOMATIC1111 with the Dreambooth extension.
- It will be absolute madness when SDXL becomes the standard model and we start getting other models from it
When I first attempted SD training, I was very frustrated. It wasn't until I found this obscure forum thread on Github that I actually started producing great results with Dreambooth. Because I have such satisfactory results, I'm very reluctant to beat my brains against LoRA and its related training techniques. I gave up trying to train TI embeddings a long time ago, and I never figured out how to train or use hypernetworks. I've only been able to get good results with Dreambooth, directly because of that thread I linked above. I make LoRAs by extracting them from Dreambooth-trained checkpoints, and I have no idea whether I'm doing the extractions the right way or not.
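The extraction mentioned above rests on the core LoRA idea: approximate the weight *difference* between a fine-tuned and a base checkpoint as a product of two thin matrices. A toy rank-1 sketch in pure Python (an illustration of the math only, not the actual extraction tool):

```python
# Toy illustration of the low-rank idea behind LoRA: a weight difference
# (fine-tuned minus base) is represented as an outer product of two thin
# factors, here a rank-1 case that reconstructs the update exactly.
def outer(col, row):
    """Rank-1 matrix from a column vector and a row vector."""
    return [[c * r for r in row] for c in col]

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

base = [[1.0, 2.0], [3.0, 4.0]]
B, A = [0.5, 1.0], [2.0, 0.0]   # the low-rank factors (like B @ A in LoRA)
delta = outer(B, A)             # low-rank update
tuned = mat_add(base, delta)
print(tuned)  # [[2.0, 2.0], [5.0, 4.0]]
```

Extraction runs this in reverse: given `tuned - base`, find small factors that approximate it, which is why an extracted LoRA is only as faithful as its chosen rank allows.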
- "Exception training model: ' Some tensors share memory" with Dreambooth on Vladmatic
Getting the same with automatic1111 and sd_dreambooth extension. Check out more here in the issues log: https://github.com/d8ahazard/sd_dreambooth_extension/issues/1266
- Yo, DreamBooth gatekeepers, SHARE YOUR HYPERPARAMETERS, please.
It's several months old and many things have changed, but the spreadsheet available through this thread on Github has been indispensable for me when I train Dreambooth models. I'm astounded no one talks about it; I bring it up all the time. The research presented there should be continued. I'd love to see similar research done for SD v2.1.
- What is the BEST solution for hyper realistic person training?
Training rate is paramount. Read this Github thread.
- How do you train your LoRAs, 1 Epoch or >1 Epoch (same # of steps)?
https://github.com/d8ahazard/sd_dreambooth_extension/discussions/547/ (in depth training principles understanding)
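The epoch-vs-steps question above comes down to simple arithmetic: total optimizer steps depend on image count, per-image repeats, epochs, and batch size, so a 1-epoch run and a multi-epoch run can cover the same number of steps. A generic illustration (not taken from the linked discussion):

```python
# Total optimizer steps for a training run. Two schedules that trade
# repeats against epochs can land on the same step count.
def total_steps(num_images, repeats, epochs, batch_size):
    steps_per_epoch = (num_images * repeats) // batch_size
    return steps_per_epoch * epochs

# 20 images x 100 repeats x 1 epoch, batch 2  -> 1000 steps
one_epoch = total_steps(20, 100, 1, 2)
# 20 images x 10 repeats x 10 epochs, batch 2 -> also 1000 steps
many_epochs = total_steps(20, 10, 10, 2)
print(one_epoch, many_epochs)  # 1000 1000
```

With equal step counts, the remaining difference between the two schedules is how the data is ordered and when checkpoints/learning-rate schedules fire per epoch.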
- Struggling to install Dreambooth
sd_dreambooth_extension https://github.com/d8ahazard/sd_dreambooth_extension.git main 926ae204 Fri Mar 31 15:12:45 2023 unknown
- Attempting to train a lora with RTX 2060 6 GB vRAM, how to go about this?
- SD just released an open source version of their GUI called StableStudio
Also, the Dreambooth extension supports an API (https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/scripts/api.py), so I'm not sure where you're getting that news :/
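The linked api.py defines the extension's actual routes. As a rough sketch of how one might call such an endpoint from Python using only the standard library — the route and payload below are placeholders, not verified against api.py; only the host/port match the webui's defaults:

```python
# Hedged sketch: POSTing JSON to a webui extension endpoint.
# The endpoint path and payload keys are HYPOTHETICAL -- check the
# extension's scripts/api.py for the real routes and parameters.
import json
import urllib.request

def build_request(base_url, endpoint, payload):
    """Build a JSON POST request for a webui extension endpoint."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}{endpoint}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(
    "http://127.0.0.1:7860",       # default AUTOMATIC1111 webui address
    "/dreambooth/start_training",  # placeholder route -- verify in api.py
    {"model_name": "my_model"},    # placeholder parameter
)
print(req.full_url)  # http://127.0.0.1:7860/dreambooth/start_training
# urllib.request.urlopen(req) would send it once the webui is running
# with its API enabled (the --api launch flag).
```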
What are some alternatives?
EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
SRNet - A TensorFlow reproduction of the paper "Editing Text in the Wild"
kohya_ss
open_llama - OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
modal-examples - Examples of programs built using Modal
dreambooth-training-guide
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
sd-scripts