Another UI for Stable Diffusion on Windows and AMD, now with LoRA and Textual Inversions

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion

  • onnx-web

    web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD

  • As a workaround, you can snag a copy of https://github.com/ssube/onnx-web/blob/main/api/extras.json, put it in the same directory as the models, outputs, and server folders, and append --extras %ONNX_WEB_BASE_PATH%\extras.json to the line starting with server\onnx-web.exe --diffusion --correction ... in the launch script (onnx-web-half.bat and/or onnx-web-full.bat); it should then pick up the extra models. A list of models I have tested, in no particular order: https://gist.github.com/ssube/bcb32781dc848d79657730e6ac540efd

  • diffusers

    🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.

  • Minutes versus hours, for the combination that originally drove me to build this: Windows and AMD. On Nvidia it seems pretty comparable, though ONNX can sometimes be faster. The best timing measurements I took are in https://github.com/huggingface/diffusers/pull/2158, where upscaling a 128x128 image to 512x512 went from 2m28s on CPU to 42 seconds on Windows/DirectML and only 7 seconds on Linux/ROCm (which is really interesting). There are also some internal comparisons in https://github.com/ssube/onnx-web/blob/main/BENCHMARK.md, and https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/python/tools/transformers/models/stable_diffusion#stable-diffusion-cuda-optimization has a whole pile of numbers. (A sketch of the upscaling pipeline call appears after this list.)

  • Stable-Diffusion-ONNX-FP16

    Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all DirectML supported cards including AMD and Intel.

  • Yes, the FP16 support in this latest release is a big part of that, and should tentatively cover 8GB cards. I haven't pushed things quite as far as https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16 yet, but support for 4GB cards is possible and we've been discussing how it works: https://github.com/ssube/onnx-web/issues/241 (a rough sketch of FP16 conversion follows this list).
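
As a rough illustration of how the upscaling timings above were produced, the sketch below drives the ONNX Stable Diffusion upscaling pipeline added in the diffusers PR linked above. The model repository ID, image path, and prompt are placeholders, and the execution provider string is what switches between CPU, DirectML, and ROCm.

    # Hedged sketch of running the ONNX upscaling pipeline from the diffusers PR above.
    # Model ID, image path, and prompt are placeholders; swap the provider string to
    # "CPUExecutionProvider" or "ROCMExecutionProvider" to compare against the other timings.
    from diffusers import OnnxStableDiffusionUpscalePipeline
    from PIL import Image

    pipe = OnnxStableDiffusionUpscalePipeline.from_pretrained(
        "ssube/stable-diffusion-x4-upscaler-onnx",  # placeholder: an ONNX export of the x4 upscaler
        provider="DmlExecutionProvider",            # DirectML on Windows
    )

    low_res = Image.open("input.png").convert("RGB").resize((128, 128))
    upscaled = pipe(prompt="a photo of a cat", image=low_res).images[0]  # 128x128 -> 512x512
    upscaled.save("upscaled.png")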
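
On the FP16 point, a generic way to roughly halve the size of an already-exported ONNX model is the onnxconverter-common float16 helper shown below. This is only a sketch of the general technique, not necessarily the conversion path onnx-web itself uses (see issue #241 for that discussion); the file paths are illustrative.

    # Hedged sketch: converting an exported FP32 ONNX UNet to FP16 with the generic
    # onnxconverter-common helper; onnx-web's own conversion path may differ.
    import onnx
    from onnxconverter_common import float16

    model = onnx.load("unet/model.onnx")  # illustrative path to an FP32 export
    model_fp16 = float16.convert_float_to_float16(
        model,
        keep_io_types=True,  # keep FP32 inputs/outputs for broader runtime compatibility
    )
    # Note: models over 2 GB need onnx.save_model(..., save_as_external_data=True) instead.
    onnx.save(model_fp16, "unet/model_fp16.onnx")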

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives; a higher number therefore indicates a more popular project.


Related posts

  • OnnxStack v0.9.0 Released - Realtime Stable Diffusion [Windows] [NoPython]

    1 project | /r/StableDiffusion | 25 Nov 2023
  • v0.8.0 of OnnxStack released - Image Batch Processing added

    1 project | /r/StableDiffusion | 18 Nov 2023
  • Photoshot: an open-source AI avatar generator Next.js app (now with app router)

    1 project | /r/nextjs | 24 Oct 2023
  • Opendream: A Layer Based Stable Diffusion Web UI

    1 project | /r/StableDiffusion | 17 Aug 2023
  • Opendream: A layer-based UI for Stable Diffusion

    1 project | /r/hypeurls | 17 Aug 2023