| | unprompted | distributed-diffusion |
|---|---|---|
| Mentions | 47 | 9 |
| Stars | 750 | 140 |
| Growth | - | - |
| Activity | 8.2 | 10.0 |
| Latest commit | 10 days ago | 11 months ago |
| Language | Python | Python |
| License | - | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
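The legend above describes a recency-weighted activity score in which recent commits count more than older ones. A minimal sketch of how such a metric could be computed (the exponential-decay formula and the half-life value are assumptions for illustration, not the site's actual algorithm):

```python
import math

def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum a decayed weight per commit: a commit from today counts 1.0,
    one that is half_life_days old counts 0.5, and so on (assumed formula)."""
    return sum(math.exp(-math.log(2) * age / half_life_days)
               for age in commit_ages_days)

# The same number of commits scores higher when they are recent.
recent = activity_score([1, 2, 3, 5, 8])
stale = activity_score([200, 210, 220, 250, 300])
print(recent > stale)  # True
```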
unprompted
-
Unprompted v10 Released: New faceswap features, GPEN support, Civitai shortcode and more! 😊
I'm pleased to announce the release of Unprompted v10.0.0, the Swiss Army knife extension for A1111. This is a major update that brings a number of new features and improvements, including:
-
In the Automatic1111 Web UI, is it possible to get ADetailer working inside Deforum?
Unprompted: https://github.com/ThereforeGames/unprompted
-
Creating a randomized crowd with various expressions through txt2img with adetailer + dynamic prompt extension
You can get the same effect with Unprompted's zoom enhance feature. Just paste this into the prompt field.
-
Txt2mask now supports batch mode, plus many more new features!
It's been a while since my last Unprompted post, so I wanted to share some of the new things you can do with this extension. 🙂
-
Where do the prompts come from?
Also, if you're curious, check out the Unprompted extension's implementation of img2pez for A1111 in img2img. The basic gist is that it uses machine learning to examine the image and tries to find tokens that would likely produce that image. I've found it to be pretty far off the mark most of the time, but the terms it gives you can actually be quite useful.
-
Why isn't there a "Hand restore" option like Face Restore?
There's a feature in unprompted that does this.
-
Dynamic Prompt wildcards not being random?
One thing to consider: there is an extension that will let you write some scripts as prompts, and it extends what you can do with wildcards: https://github.com/ThereforeGames/unprompted.git
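The wildcard behavior in the thread above can be illustrated with a toy expansion routine (a hypothetical sketch, not the actual sd-dynamic-prompts or Unprompted implementation): if the RNG is re-seeded identically for every image in a batch, each prompt expands the same way, which is one way wildcards end up "not random".

```python
import random
import re

# Toy wildcard lists; real extensions load these from wildcard files.
WILDCARDS = {
    "hair": ["red hair", "black hair", "silver hair"],
    "mood": ["smiling", "frowning", "surprised"],
}

def expand(template, rng):
    """Replace each __name__ token with a random entry from WILDCARDS."""
    return re.sub(r"__(\w+)__",
                  lambda m: rng.choice(WILDCARDS[m.group(1)]),
                  template)

template = "portrait of a woman, __hair__, __mood__"
print(expand(template, random.Random(42)))   # same seed -> same expansion
print(expand(template, random.Random()))     # fresh entropy -> varies per call
```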
-
[Zoom Enhance] - 7.0 - 9.2 Either doesn't work at all; Doesn't stitch or Makes barely any changes; Desperate to FIX
Solution for 9.2.0: Use the modified version from https://github.com/ThereforeGames/unprompted/issues/84 that you posted. This made a stitch image in %TEMP% and displayed the modified image in the Auto1111 GUI. The only reason I am @ing is that if I had this issue on 9.2.0, I'm sure others do too. I had to use the FULL shortcode in the wizard zoom enhance section: `[if batch_index=0][after][zoom_enhance show_original mask='face' replacement='[insert prompt]' mask_sort_method='left-to-right' upscale_method='Nearest Neighbor' downscale_method='Lanczos' blur_size=0.03 cfg_scale_min=3.0 denoising_max=0.65 mask_size_max=0.3 mask_method='clipseg' sharpen_amount=1.0 color_correct_method='none' color_correct_timing='pre' color_correct_strength=1.0 min_area=50.0 contour_padding=0.0 upscale_width=512.0 upscale_height=512.0 hires_size_max=1024.0][/after][/if]`
-
does SD do anything behind the scenes
It was a long time ago that I used it, but I think it takes some learning. I would suggest reading the starter guide; the shortcode relates to the extra detail, but you need to learn how to write it. I'm pretty sure there is a built-in helper. Sorry, I haven't used it extensively. There must be a video guide on YouTube too; there is for almost all of this AI stuff: https://github.com/ThereforeGames/unprompted/blob/main/docs/GUIDE.md
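To give the flavor of the shortcode idea mentioned above, here is a toy parser that substitutes bracketed `[name]...[/name]` blocks in a prompt. This is a hypothetical sketch with made-up handlers, not Unprompted's real engine, which supports nesting, many built-in shortcodes, and richer argument parsing (see the linked GUIDE.md):

```python
import re

# Hypothetical handlers for illustration; Unprompted ships its own library.
HANDLERS = {
    "upper": lambda body, args: body.upper(),
    "repeat": lambda body, args: " ".join([body] * int(args.get("times", "2"))),
}

def run_shortcodes(text):
    """Expand non-nested [name arg=val]body[/name] blocks via HANDLERS."""
    pattern = re.compile(r"\[(\w+)([^\]]*)\](.*?)\[/\1\]", re.DOTALL)

    def replace(m):
        name, argstr, body = m.group(1), m.group(2), m.group(3)
        args = dict(re.findall(r"(\w+)=([\w.]+)", argstr))
        handler = HANDLERS.get(name)
        # Unknown shortcodes are left untouched.
        return handler(body, args) if handler else m.group(0)

    return pattern.sub(replace, text)

print(run_shortcodes("a [upper]misty[/upper] forest, [repeat times=3]lush[/repeat]"))
# -> a MISTY forest, lush lush lush
```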
-
Unprompted Extension No longer Upscaling or Fixing Faces etc
And the updated extension here: https://github.com/ThereforeGames/unprompted
distributed-diffusion
-
What is Midjourney doing better than us?
Noob here, I don't know much about how the community could contribute to reinforcing a shared training run, but this is maybe what we should aim for. Imagine users contributing to training a large model, with a system of upvotes like Midjourney's... They have control over their model and keep reinforcing it, while we are fragmented across multiple models, LoRAs and such, everyone focusing on different things. I did some research a while ago and ended up here: https://github.com/chavinlo/distributed-diffusion and https://learning-at-home.github.io/
-
Training Stable Diffusion from Scratch Costs <$160k
Yes, Hivemind trained a 6B GPT model like this.
General model training: https://github.com/learning-at-home/hivemind
Stable Diffusion specific: https://github.com/chavinlo/distributed-diffusion
Inference-only Stable Diffusion: https://stablehorde.net/
-
Distributed training
-
SETI@home type model for training Stable Diffusion?
-
Decentralized Training - Train models over the internet!
Github Repo: https://github.com/chavinlo/distributed-diffusion
-
Community driven distributed diffusion training
There has been some effort to train diffusion models on distributed community hardware, mainly https://github.com/chavinlo/distributed-diffusion , which is based on https://learning-at-home.github.io/ . If this takes off, we can train Stable Diffusion together as a community without relying on any company.
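The decentralized approach described above boils down to volunteers computing gradients on local data shards and averaging them before each shared update. A minimal single-process sketch of that averaging step (pure Python, not hivemind's actual API; real systems add gradient compression, fault tolerance, and peer discovery):

```python
def average_gradients(peer_grads):
    """All-reduce by averaging: every peer contributes its local gradient
    vector, and all peers apply the same averaged update."""
    n = len(peer_grads)
    return [sum(vals) / n for vals in zip(*peer_grads)]

def sgd_step(params, grad, lr=0.1):
    """One plain SGD update with the averaged gradient."""
    return [p - lr * g for p, g in zip(params, grad)]

# Three "volunteer" peers computed gradients on different data shards.
peer_grads = [[0.9, -0.3], [1.1, -0.1], [1.0, -0.2]]
params = [0.5, 0.5]
avg = average_gradients(peer_grads)   # roughly [1.0, -0.2]
params = sgd_step(params, avg)
print(params)  # parameters move against the averaged gradient direction
```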
-
Colaboratory Dreambooth Training
If you want to try it on your own, you can check the repo https://github.com/chavinlo/distributed-diffusion or the discord https://discord.gg/xVsyrmhQWS
-
We need as a community to train Stable Diffusion by ourselves so that new models remain open source
What are some alternatives?
stable-diffusion-webui-promptgen
boinc - Open-source software for volunteer computing and grid computing.
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
stablediffusionAnime - High-Resolution Image Synthesis with Latent Diffusion Models
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
diffusion-benchmark
a1111-sd-webui-tagcomplete - Booru style tag autocompletion for AUTOMATIC1111's Stable Diffusion web UI
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
ControlNet - Let us control diffusion models!
artbot-for-stable-diffusion - A front-end GUI for interacting with the AI Horde / Stable Diffusion distributed cluster
sd-webui-controlnet - WebUI extension for ControlNet