Speed Up Stable Diffusion by ~50% Using Flash Attention

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion

  • flash-attention

    Fast and memory-efficient exact attention

  • No, that's not it. You just need to install this library and change a tiny bit of the code. Shouldn't be a problem.

  • xformers

    Hackable and optimized Transformers building blocks, supporting a composable construction.

  • I see. Are you using one of the web UIs? Can you share some information on how to install xformers on Linux? I'm assuming this is the starting point. It would be nice to install it in Automatic's venv, since that's what I'm running now on WSL; I just need to figure out the dependencies and how/where to install it.

  • diffusers

    🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
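The "fast and memory-efficient exact attention" that flash-attention advertises comes from computing attention in tiles with a running (online) softmax, so the full n×n score matrix is never materialized. Below is a minimal NumPy sketch of that idea, assuming single-head 2-D inputs; it is an illustration of the technique, not the library's actual fused CUDA kernels, and the function names are mine.

```python
import numpy as np

def naive_attention(Q, K, V):
    # Materializes the full (n, n) score matrix -- O(n^2) memory.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def tiled_attention(Q, K, V, block=4):
    # Processes K/V in blocks, keeping a running max and running sum
    # (the "online softmax" trick), so only a (n, block) slice of the
    # score matrix exists at any time. Output matches naive_attention.
    n, d = Q.shape
    out = np.zeros((n, V.shape[-1]))
    running_max = np.full((n, 1), -np.inf)
    running_sum = np.zeros((n, 1))
    for start in range(0, K.shape[0], block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        scores = Q @ Kb.T / np.sqrt(d)  # only (n, block) in memory
        block_max = scores.max(axis=-1, keepdims=True)
        new_max = np.maximum(running_max, block_max)
        # Rescale previously accumulated values to the new running max.
        correction = np.exp(running_max - new_max)
        weights = np.exp(scores - new_max)
        out = out * correction + weights @ Vb
        running_sum = running_sum * correction + weights.sum(axis=-1, keepdims=True)
        running_max = new_max
    return out / running_sum
```

For diffusers users, the "tiny bit of code" mentioned in the thread typically amounts to a one-line call on the pipeline once xformers is installed, e.g. `pipe.enable_xformers_memory_efficient_attention()`; the exact method availability varies by diffusers version.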

