| | unprompted | diffusers |
| --- | --- | --- |
| Mentions | 47 | 105 |
| Stars | 745 | 1,870 |
| Growth | - | - |
| Activity | 8.2 | 7.0 |
| Latest commit | 3 days ago | 11 months ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |

Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
unprompted
-
Unprompted v10 Released: New faceswap features, GPEN support, Civitai shortcode and more!
I'm pleased to announce the release of Unprompted v10.0.0, the Swiss Army knife extension for A1111. This is a major update that brings a number of new features and improvements, including:
-
In the Automatic1111 Web UI, is it possible to get ADetailer working inside Deforum?
Unprompted: https://github.com/ThereforeGames/unprompted
-
Creating a randomized crowd with various expression through txt2img with adetailer + dynamic prompt extension
you can get the same effect with unprompted zoom enhance feature. just paste this into the prompt field.
-
Txt2mask now supports batch mode, plus many more new features!
It's been a while since my last Unprompted post, so I wanted to share some of the new things you can do with this extension.
-
Where do the prompts come from?
Also, if you're curious, check out the Unprompted extension's implementation of img2pez for A1111 in img2img. The basic gist is that it uses machine learning to examine the image and tries to find tokens that would likely produce that image. I've found it to be pretty far off the mark most of the time, but the terms it gives you can actually be quite useful.
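The description above amounts to searching the discrete token space for a prompt that best matches an image. The real img2pez scores candidates with CLIP image-text similarity and optimizes with gradients; purely to illustrate the idea of a token search loop, here is a toy greedy sketch with a stand-in scoring function (the function and vocabulary below are hypothetical, not from the actual tool):

```python
def greedy_prompt_search(vocab, score, max_tokens=4):
    """Greedily build a token list that maximizes `score(tokens)`.

    `score` stands in for CLIP image-text similarity in the real tool;
    the loop stops when no candidate token improves the score."""
    tokens = []
    for _ in range(max_tokens):
        best = max(vocab, key=lambda t: score(tokens + [t]))
        if score(tokens + [best]) <= score(tokens):
            break
        tokens.append(best)
    return tokens

# Toy score: reward tokens from a target description, with a small
# length penalty so the search prefers shorter prompts.
target = {"sunset", "beach", "photo"}
score = lambda toks: len(set(toks) & target) - 0.01 * len(toks)
print(greedy_prompt_search(["cat", "sunset", "beach", "photo", "dog"], score))
# → ['sunset', 'beach', 'photo']
```

The greedy loop is only a teaching device; the "hard prompt" methods this feature is based on work in embedding space rather than enumerating tokens one by one.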
-
Why isn't there a "Hand restore" option like Face Restore?
There's a feature in unprompted that does this.
-
Dynamic Prompt wildcards not being random?
One thing to consider: there is an extension that will let you write some scripts as prompts, and it extends what you can do with wildcards. https://github.com/ThereforeGames/unprompted.git
-
[Zoom Enhance] - 7.0 - 9.2 Either doesn't work at all; Doesn't stitch or Makes barely any changes; Desperate to FIX
Solution for 9.2.0: use the modified version from https://github.com/ThereforeGames/unprompted/issues/84 you posted. This produced a stitch image in %TEMP% and displayed the modified image in the Auto1111 GUI. The only reason I am @ing is that if I had this issue on 9.2.0, I'm sure others do too. I had to use the full shortcode in the wizard's Zoom Enhance section:

[if batch_index=0][after][zoom_enhance show_original mask='face' replacement='[insert prompt]' mask_sort_method='left-to-right' upscale_method='Nearest Neighbor' downscale_method='Lanczos' blur_size=0.03 cfg_scale_min=3.0 denoising_max=0.65 mask_size_max=0.3 mask_method='clipseg' sharpen_amount=1.0 color_correct_method='none' color_correct_timing='pre' color_correct_strength=1.0 min_area=50.0 contour_padding=0.0 upscale_width=512.0 upscale_height=512.0 hires_size_max=1024.0][/after][/if]
-
does SD do anything behind the scenes
It was a long time ago that I used it, but I think it takes some learning. I would suggest reading the starter guide. The shortcode relates to the extra detail, but you need to learn how to write it; I'm pretty sure there is a built-in helper. Sorry, I haven't used it extensively. There must be a video guide on YouTube too; there is for almost all of this AI stuff. https://github.com/ThereforeGames/unprompted/blob/main/docs/GUIDE.md
-
Unprompted Extension No longer Upscaling or Fixing Faces etc
And the updated extension here: https://github.com/ThereforeGames/unprompted
diffusers
-
Useful Links
ShivamShrirao's Diffusers - Pretrained diffusion models across multiple modalities.
-
DreamBooth fine-tuning failing to get the style
Like the title says, I'm trying to fine-tune a model to match the style of a popular manhwa. I'm using the ShivamShrirao Google Colab to accomplish this.
-
How to resume Dreambooth training?
I am running the DreamBooth_Stable_Diffusion.ipynb notebook from ShivamShrirao locally on my machine. Let's say I have trained for 500 iterations and it hasn't converged yet. How do I make it resume training from that iteration so it can do another 500?
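Mainline diffusers training scripts handle this by saving intermediate checkpoints (via `--checkpointing_steps`) into `checkpoint-<step>` folders and accepting `--resume_from_checkpoint` (a path or `latest`) on the next run; Shivam's fork saves weights at a `save_interval` instead, so its flags may differ. As a sketch under that directory-layout assumption, finding the newest checkpoint to resume from might look like:

```python
import os
import re


def latest_checkpoint(output_dir):
    """Return the path of the newest `checkpoint-<step>` subfolder
    (the layout used by diffusers' training scripts), or None if the
    output directory holds no checkpoints yet."""
    ckpts = [d for d in os.listdir(output_dir)
             if re.fullmatch(r"checkpoint-\d+", d)]
    if not ckpts:
        return None
    # Sort numerically by step, not lexically (checkpoint-1000 > checkpoint-500).
    newest = max(ckpts, key=lambda d: int(d.rsplit("-", 1)[1]))
    return os.path.join(output_dir, newest)
```

You would then pass the returned path (or simply `latest`) to `--resume_from_checkpoint` when relaunching the script.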
-
Non web-ui colab
My understanding, based on messages from an (alleged) representative of colabs, is that the webui is the problem, not SD itself. This also seems to be the consensus in the comments section of other posts. I have not yet seen a link to colab based webui alternatives so here is something I found from a tutorial. I am certain that there are better alternatives. Anyone have a better idea? This will still probably be useful to other people like me who are just messing around.
- [Stablediffusion] Guide for DreamBooth with 8 GB of VRAM on Windows
-
Finally got Dreambooth running without errors... but is it even using the model I trained?
I'm running ShivamShrirao's fork of diffusers; I ran into an fp16 issue and had to patch in a fix from the main branch (#1567).
-
Shivam Stable Diffusion: Getting same example models repeatedly (SD + Dreambooth)
I am running the Shivam Stable Diffusion Jupyter notebook: diffusers/DreamBooth_Stable_Diffusion.ipynb at main · ShivamShrirao/diffusers · GitHub.
- Running Stable Diffusion locally with personalized changes
- Can't create embeddings with dreambooth ckpt
-
Weird issue using Shivam's Diffuser notebook
Are you using this one? https://github.com/S
What are some alternatives?
stable-diffusion-webui-promptgen - stable-diffusion-webui-promptgen
stable-diffusion-webui - Stable Diffusion web UI
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
a1111-sd-webui-tagcomplete - Booru style tag autocompletion for AUTOMATIC1111's Stable Diffusion web UI
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
ControlNet - Let us control diffusion models!
efficient-dreambooth - [Moved to: https://github.com/smy20011/dreambooth-docker]
sd-webui-controlnet - WebUI extension for ControlNet
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.