a1111-sd-webui-tome vs a1111-stable-diffusion-webui-vram-estimator

| | a1111-sd-webui-tome | a1111-stable-diffusion-webui-vram-estimator |
|---|---|---|
| Mentions | 4 | 4 |
| Stars | 49 | 106 |
| Growth | - | - |
| Activity | 10.0 | 3.4 |
| Latest commit | 10 months ago | 9 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
a1111-sd-webui-tome
- a1111 Cross attention v1.3.0
- ToMe (Token Merging) - on AUTOMATIC1111 at all?
  - Here's how to install it: https://github.com/SLAPaper/a1111-sd-webui-tome
- What are your favorite Extensions?
- 4090 increased speed causing memory error
  - Good ole --opt-split-attention might give you some speed increases. Never treated me wrong.
  - Check out the optimizations page as well. --opt-sdp-attention seems to be faster than --opt-sdp-no-mem-attention, but less deterministic. Depends on what you're looking for.
  - You should also give xformers a whack. Here's a link to a precompiled wheel for Python 3.10.9 and CUDA 11.8. Some people have noted speed increases, some not. I've found Torch 2.0 works just about as well by itself as xformers did on Torch 1.13. Gave up on xformers for the night, but I'll definitely give it another test later.
  - Also, go grab the token merger extension. This thing cut my generation time by around a third. It's like witchcraft, I swear. No loss in quality/cohesion either.
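The token merger extension mentioned above is based on ToMe (Token Merging), which speeds up attention by averaging the most similar tokens before each block so fewer tokens need processing. A minimal NumPy sketch of the core idea (bipartite matching on cosine similarity; the `merge_tokens` helper, the plain averaging rule, and the shapes are simplified assumptions for illustration, not the extension's actual code):

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most similar token pairs via bipartite soft matching.

    Simplified sketch of the ToMe idea: split tokens into two alternating
    sets, match each "source" token to its most similar "destination"
    token by cosine similarity, and average the r best-matched pairs.
    Returns (n - r) tokens for an input of n tokens.
    """
    src, dst = tokens[::2].copy(), tokens[1::2].copy()
    # Cosine similarity between every src token and every dst token.
    a = src / np.linalg.norm(src, axis=1, keepdims=True)
    b = dst / np.linalg.norm(dst, axis=1, keepdims=True)
    sim = a @ b.T
    best_dst = sim.argmax(axis=1)          # best dst match per src token
    best_sim = sim.max(axis=1)
    order = np.argsort(-best_sim)          # most similar src tokens first
    merge_idx, keep_idx = order[:r], order[r:]
    for i in merge_idx:                    # average merged pairs into dst
        j = best_dst[i]
        dst[j] = (dst[j] + src[i]) / 2
    return np.concatenate([dst, src[keep_idx]])
```

Because the merged sequence is shorter, every downstream attention layer does less work; the real extension also tracks token "sizes" so repeated merges stay properly weighted, which this sketch omits.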
a1111-stable-diffusion-webui-vram-estimator
- A Simple 4-Step Workflow with Reference Only ControlNet, or "How I stop prompting and love the ControlNet!"
  - The VRAM Estimator extension is a great little helper.
- What determines how large you can generate an image?
  - Either way, if you're using Auto1111, there's an extension that lets you benchmark your image generation capabilities and estimate what you can handle with available VRAM.
- How is VRAM used / allocated when using hires fix?
  - Look for the 'VRAM Estimator' extension. GitHub link
- 4090 increased speed causing memory error
  - Batch size is directly limited by how much VRAM you have. The size of the batch dictates how many images to process at once. The more images being processed, the more VRAM you'll need. My 1060 6GB, for instance, can only run a batch size of 4 at 512x768, but I can run a batch size of 2 at 768x768, and a batch size of 1 at 1024x1024. The more images you want to generate at once, the smaller they'll need to be.
  - There's this extension to estimate VRAM. It runs a series of tests to find out the limits of your card and when it runs out of memory. It was a bit jank when I ran it, but it might be what you're looking for.
  - There's always the old tried-and-true method of "keep pushing it till it breaks". That's how I found my card's limits.
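The three GTX 1060 data points quoted above (batch 4 at 512x768, batch 2 at 768x768, batch 1 at 1024x1024) all work out to roughly the same total pixel count per batch, so a crude rule of thumb is to treat batch_size × width × height as a fixed budget for a given card. A hedged sketch (the linear model and the `max_batch_size` helper are assumptions for illustration; real VRAM use grows faster than linearly at high resolutions, which is exactly what the estimator extension measures empirically):

```python
def max_batch_size(width: int, height: int,
                   budget_pixels: int = 4 * 512 * 768) -> int:
    """Rough upper bound on batch size at a given resolution.

    Assumes activation memory scales linearly with batch * width * height,
    calibrated from the quoted GTX 1060 6GB data point (batch 4 at 512x768).
    Attention layers scale worse than linearly, so treat this as a
    starting guess, not a guarantee.
    """
    return max(1, budget_pixels // (width * height))

# The other two quoted data points fall out of the same budget:
print(max_batch_size(512, 768))    # 4
print(max_batch_size(768, 768))    # 2
print(max_batch_size(1024, 1024))  # 1
```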
What are some alternatives?
- sd-webui-cutoff - Cutoff - Cutting Off Prompt Effect
- stable-diffusion-webui - Stable Diffusion web UI
- stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
- automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
- stable-diffusion-webui-model-toolkit - A multipurpose toolkit for managing, editing and creating models
- sd-webui-infinite-image-browsing - A fast and powerful image/video browser for Stable Diffusion webui / ComfyUI / Fooocus / NovelAI, featuring infinite scrolling and advanced search capabilities using image parameters. It also supports standalone operation.
- sd-webui-controlnet - WebUI extension for ControlNet
- sd-canvas-editor - A custom extension for sd-webui that integrates a full-capability canvas editor supporting layers, text, images, elements, etc.
- stable-diffusion-webui-state - Stable Diffusion extension that preserves UI state
- sd_webui_SAG