Is SDXL supposed to be this slow on my system?

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion

  • automatic

    SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models

  • I found this thread on GitHub saying this was fixed in the latest version with an optional setting. I tried enabling it, as they described, but it just resulted in an immediate CUDA out-of-memory error when starting generation. So it seems I actually do need the shared memory, which I assume is my issue (a quick VRAM check is sketched after this list).

  • stable-diffusion-webui

    Stable Diffusion web UI

  • The Automatic cross-attention setting defaults to Doggettx if it's the first option, and you run out of VRAM. Select either xformers or SDP-no-mem and restart A1111. You'll also need to use the medvram SDXL switch, as specified at https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Optimum-SDXL-Usage: --medvram-sdxl --xformers (a sketch of the resulting launch config follows this list).

  • RuinedFooocus

    Focus on prompting and generating

  • Fooocus and u/runew0lf's RuinedFooocus are great options, but if you want to go deep, it's either A1111, SD Next or Comfy.
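
For reference, here is a minimal sketch of the flags suggested above, applied to a stock A1111 webui-user.bat on Windows. The file name and structure are the repo's defaults; treat this as an illustration, not the thread author's exact setup:

    @echo off
    rem Launch flags from the Optimum-SDXL-Usage wiki page linked above:
    rem --medvram-sdxl keeps SDXL's model components within limited VRAM,
    rem --xformers enables the xformers cross-attention optimization.
    set COMMANDLINE_ARGS=--medvram-sdxl --xformers
    call webui.bat

And to check whether a slow generation is spilling out of dedicated VRAM into shared system memory, as described in the first comment, a quick PyTorch check can help. torch.cuda.mem_get_info is a standard PyTorch call; the interpretation in the comments is an assumption about this particular setup:

    import torch

    # Free and total memory, in bytes, on the current CUDA device.
    free, total = torch.cuda.mem_get_info()
    print(f"VRAM free: {free / 2**30:.2f} GiB of {total / 2**30:.2f} GiB")
    # If free VRAM sits near zero while generation runs slowly rather than
    # failing outright, the driver is likely paging into shared memory.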

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

