runpodctl vs EveryDream2trainer

| | runpodctl | EveryDream2trainer |
|---|---|---|
| Mentions | 8 | 48 |
| Stars | 216 | 752 |
| Growth | 4.6% | - |
| Activity | 9.4 | 9.2 |
| Latest commit | about 1 month ago | 9 days ago |
| Language | Go | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
runpodctl
- Ask HN: What's the best hardware to run small/medium models locally?
- Old Timer needs help setting up stable diffusion. Extremely confused.
You can rent a GPU on https://www.runpod.io/, which also has Stable Diffusion templates, so any time you start the GPU, SD will be preinstalled and ready to use :)
- I need some help guys
Another option is to use a service like www.runpod.io to rent time on more powerful systems. A few times a week I’ll load up whatever the latest 13B or 20/24B (and even low bpw 70Bs with EXL2) model is on a system with an rtx 3090 for $0.44/hr, and sometimes I’ll treat myself to an A6000 system to run 4bit 70Bs for $0.79/hr. They also offer A4000 systems with 16GB VRAM which is plenty to run a 4bpw 13B EXL2 model or an 8bpw 7B model, and those systems are just $0.36/hr.
- GPT-3.5 Turbo fine-tuning and API updates
- What's the best (and cheap) way to try out all the new LLMs on cloud services?
Many people use sites like runpod for this.
- Looking for Paperspace (or equivalent) Help
You can rent time on systems on www.runpod.io with a 48GB A6000 for $0.50/hr spot pricing and $0.79/hr regular pricing. 3090s can be had for $0.29/hr spot pricing and $0.44/hr regular.
- Only seeing disk cache slider, no GPU anything?
Keep in mind you can also use www.runpod.io to rent access to systems with a 3090 for about $0.45/hr. It might be significantly cheaper, or at least more affordable, to do this for a few hours a week instead of dropping $1,000 on a new laptop. This is what I personally do (I generally use it in the evening, and can get an Nvidia A6000 with 48GB VRAM for $0.49/hr spot pricing). This lets me play with the latest 33B and 65B models, with really fast replies, and I spend maybe $5-7 a week if I use it a lot.
- What is the best bang-for-buck persistent virtual GPU rental to run SD?
You can also connect the volume to other cloud storage (pCloud, Dropbox, etc.) via the Cloud Sync option in the "My Pods" console, use Python libraries to pull/push files from S3 buckets / Dropbox / FTPS / your NAS / etc., or use their Docker tools to transfer files between your local PC and the volume: https://github.com/runpod/runpodctl/blob/main/README.md
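As a sketch of the local-to-pod transfer workflow described above: the runpodctl README documents a peer-to-peer `send`/`receive` pair keyed by a one-time code. The filename and the code below are placeholders, not real values.

```shell
# On your local machine: send a file; runpodctl prints a one-time code
runpodctl send dataset.zip

# On the pod: receive the file using that code (placeholder shown)
runpodctl receive <one-time-code>
```

The same pair works in the other direction (send from the pod, receive locally), which is handy for pulling trained checkpoints back down before stopping a pod.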
EveryDream2trainer
- Question on SD Finetuning
- 80% Completed!
I'm using EveryDream2 with SD v1 based models. You can define whatever resolution you want for training, as long as your VRAM allows it.
- Freedom. Finetuned 2.1 that can gen (+1024x), often without a negative prompt. Release this week or next. Demo available for testing in the next few days. Here are some creations from a closed beta that I released on Twitter yesterday: 20 people, 3 hours, 1500 gens. I hope you enjoy. More in the imgur album.
There are demo optimizer settings here that use special settings for the text encoder: https://github.com/victorchall/EveryDream2trainer/blob/main/optimizerSD21.json
- Can we clear up the regularization images concept once and for all?
- Are there any recent, or still relevant, tutorials on training LoRAs within Dreambooth? Any specific / special settings to take advantage of my 4090?
If you have a large dataset of pictures, I'd recommend https://github.com/victorchall/EveryDream2trainer instead of Dreambooth. It has decent documentation (well for an open source project that is) and it has a very nice validation feature (disabled by default) which actually gives you good feedback on how the training is progressing.
- Train a model from 300k images?
EveryDream2 can handle this. They even have tools to help you autocaption using BLIP. https://github.com/victorchall/EveryDream2trainer
- [Dreambooth] The docs for this Dreambooth-like trainer, Everydream2
- Resources for artists interesting in using StableDiffusion as a tool?
- Is Joe Penna's DreamBooth still the best option for training photorealistic persons or faces?
- Can we identify most Stable Diffusion Model issues with just a few circles?
I recommend EveryDream2 for training; it has a lot of nice features. I'm not sure there is a proper manual for learning how to train, but there is a lot of information available. I have been learning this subject for a few months myself.
What are some alternatives?
chroma - the AI-native open-source embedding database
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
StableTuner - Finetuning SD in style.
KoboldAI-Runpod - This is just a simple set of notebooks to load koboldAI and SillyTavern Extras on a runpod with Pytorch 2.0.1 Template
sd-scripts
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
EveryDream-trainer - General fine tuning for Stable Diffusion
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.