stable-diffusion-webui-directml vs sd-webui-segment-anything

| | stable-diffusion-webui-directml | sd-webui-segment-anything |
|---|---|---|
| Mentions | 74 | 17 |
| Stars | 1,577 | 3,224 |
| Growth | - | - |
| Activity | 9.9 | 6.3 |
| Last commit | 7 days ago | 15 days ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-directml
- Is Stable Diffusion compatible with AMD GPUs or not?
- RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!
  I'm getting this error on a fresh install (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues). I'm running it on an AMD RX 6700 XT with 12 GB of VRAM, generating a single image at default settings (512x512, 20 steps, etc.). I can do simple prompts (e.g. "kitty cat"), but as soon as I add a couple more tags I get the aforementioned error, usually 20-30% into generating an image. I went through this thread (https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/38) and tried every solution I saw, most of them variations of adding --medvram --precision full --no-half --no-half-vae --opt-split-attention-v1 --opt-sub-quad-attention --disable-nan-check to the command-line arguments. What else might I be able to try? Thanks.
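For anyone hitting the same wall: these flags are normally set in `webui-user.bat` in the webui root. A minimal sketch, assuming a default Windows install of the DirectML fork — the exact flag combination is the one quoted in the post above and should be tuned per GPU:

```shell
@echo off
rem webui-user.bat -- launch settings for the DirectML fork (sketch, tune per GPU)
rem Empty PYTHON/GIT/VENV_DIR means the launcher's defaults are used.
set PYTHON=
set GIT=
set VENV_DIR=
rem Memory-saving flags quoted in the thread above
set COMMANDLINE_ARGS=--medvram --precision full --no-half --no-half-vae --opt-sub-quad-attention --disable-nan-check
call webui.bat
```

Dropping flags one at a time from this list is a reasonable way to find which ones your card actually needs.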
- Best AMD SD Guide for 2023?
  I use AUTOMATIC1111; you can find the installation instructions on GitHub, in this branch: https://github.com/lshqqytiger/stable-diffusion-webui-directml. It works fine, although the speed is what it is — I also have an old GPU.
- Just how much VRAM do I need? It keeps saying I don't have enough with a 7900 XT.
  I'm using this one: https://github.com/lshqqytiger/stable-diffusion-webui-directml
- I am confused about "same seed = same picture". Any explanations or insights? The journey for this is in the comments.
  Setup: https://github.com/lshqqytiger/stable-diffusion-webui-directml - starting webui-user with COMMANDLINE_ARGS=--opt-sub-quad-attention --disable-nan-check - AMD 8 GB Radeon Pro WX 7100
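On the "same seed = same picture" question: the seed only fixes the pseudo-random initial noise, so an identical seed plus identical settings, model, and backend reproduce the same image, while changing anything else (sampler, resolution, optimization flags, even hardware backend) can break reproducibility. A minimal pure-Python sketch of that determinism, where `seeded_noise` is a hypothetical stand-in for the sampler's noise source, not the webui's actual code:

```python
import random

def seeded_noise(seed, n=4):
    # Stand-in for the latent noise an SD sampler draws at the start:
    # the same seed always yields the same pseudo-random sequence.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed -> identical starting noise -> identical image (all else equal).
assert seeded_noise(1234) == seeded_noise(1234)
# A different seed gives different starting noise, hence a different image.
assert seeded_noise(1234) != seeded_noise(4321)
```

The real pipeline adds one wrinkle: different attention/precision flags can change the floating-point math downstream of the noise, so the same seed can still drift slightly across configurations.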
- Who went to the march against AI at the Obelisk? Tell us how it went.
- Stable Diffusion will only use my CPU?
  I'm running this fork (https://github.com/lshqqytiger/stable-diffusion-webui-directml) on a PC with a Ryzen 5700X and a Radeon RX 6700 XT 12 GB video card.
- (AMD) Random Running Out of Memory Error After Generation
  I am using the DirectML fork.
- Stable Diffusion on AMD 6900 XT is Super Slow
  I'm running Stable Diffusion on my 6900 XT, and I feel like it's way slower than normal. I'm using the updated web UI: https://github.com/lshqqytiger/stable-diffusion-webui-directml.
- Stable Diffusion DirectML on AMD APU only (no external GPU) - RAM Usage?
  This refers to the use of iGPUs (for example, a Ryzen 5 5600G): no graphics card, only an APU. The DirectML fork of Stable Diffusion (SD for short from now on) works pretty well with AMD — not only with discrete GPUs, but also with APUs alone.
sd-webui-segment-anything
- Textual inversion: the best way to prepare photos of a person?
  One idea would be to use Segment Anything to cut the character/face out of the background and then replace it with random backgrounds that you generate with Stable Diffusion. Here's an extension for Automatic1111 :) https://github.com/continue-revolution/sd-webui-segment-anything
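The cut-out-and-replace step described above is, at its core, mask compositing: SAM produces a boolean mask of the subject, and you paste the masked pixels over a new background. A minimal NumPy sketch — the tiny arrays are stand-ins for real images and a real SAM mask, not the extension's actual code:

```python
import numpy as np

def replace_background(subject, background, mask):
    # subject, background: HxWx3 images of the same shape.
    # mask: HxW boolean array, True where the subject is (e.g. a SAM mask).
    # Where the mask is True keep the subject pixel, otherwise the background.
    return np.where(mask[..., None], subject, background)

# Tiny 2x2 stand-in images: white "subject" on a black "background".
subject = np.full((2, 2, 3), 255, dtype=np.uint8)
background = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
out = replace_background(subject, background, mask)
```

For training data you would then drop a freshly generated background into `background` for each copy, giving the textual-inversion run varied scenes with a constant subject.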
- How hard is it to "code" a tool based on segment-anything and Stable Diffusion?
  Check out this code: https://github.com/continue-revolution/sd-webui-segment-anything
- Can I use Interrogate CLIP or something similar to get image position data?
- Best way to mask images automatically?
- Segment Anything is the extension that you're looking for.
- What's your favorite small tweaks to make? I'll go first
- Show HN: Image background removal without annoying subscriptions
  If anyone is already running auto1111, or is simply uninterested in paying, there's an addon that does this very well, available here: https://github.com/KutsuyaYuki/ABG_extension. Additionally, I've had very good results using the masks generated by Facebook's SAM, which is also available as an addon here: https://github.com/continue-revolution/sd-webui-segment-anyt...
- The main reason why people will keep using open source vs Photoshop and other big-tech generative AIs
- Stable Diffusion + Segment Anything App and Tutorial
  There's an A1111 extension already that I think does the same thing (I've had it installed for a few weeks now): https://github.com/continue-revolution/sd-webui-segment-anything
- YourVision: Stable Diffusion + Segment Anything
  Use this and inpainting: https://github.com/continue-revolution/sd-webui-segment-anything
What are some alternatives?
SHARK - High Performance Machine Learning Distribution
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
StableDiffusionUI - Stable Diffusion UI: Diffusers (CUDA/ONNX)
stable-diffusion-webui-rembg - Removes backgrounds from pictures. Extension for webui.
sd-webui-controlnet - WebUI extension for ControlNet
ddetailer
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
segment-anything - The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
stable-diffusion-webui - Stable Diffusion web UI
Auto-Photoshop-StableDiffusion-Plugin - A user-friendly plug-in that makes it easy to generate stable diffusion images inside Photoshop using either Automatic or ComfyUI as a backend.
OnnxDiffusersUI - UI for ONNX based diffusers
sd-webui-segment-everything - Segment Anything for Stable Diffusion Webui [Moved to: https://github.com/continue-revolution/sd-webui-segment-anything]