WIP - TensorRT accelerated stable diffusion img2img from mobile camera over webrtc + whisper speech to text. Interdimensional cable is here! Code: https://github.com/venetanji/videosd

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion.
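
The pipeline described in the post streams camera frames from a phone over WebRTC, re-renders each frame with the TensorRT img2img pipeline, and takes the prompt from Whisper speech-to-text. Below is a minimal sketch of that loop, assuming aiortc on the server side; the names DiffusionTrack and img2img are hypothetical, not taken from the repo.

    # Hypothetical sketch of the frame loop described in the post (aiortc server side).
    # "img2img" and "DiffusionTrack" are illustrative names, not taken from the repo.
    import whisper
    from aiortc import MediaStreamTrack
    from av import VideoFrame

    # Whisper turns a recorded voice clip into the diffusion prompt.
    stt = whisper.load_model("base")
    prompt = stt.transcribe("spoken_prompt.wav")["text"]

    class DiffusionTrack(MediaStreamTrack):
        """Re-renders every incoming WebRTC camera frame with img2img."""
        kind = "video"

        def __init__(self, source, img2img):
            super().__init__()
            self.source = source    # remote camera track received over WebRTC
            self.img2img = img2img  # assumed callable: (HxWx3 ndarray, str) -> ndarray

        async def recv(self):
            frame = await self.source.recv()
            image = frame.to_ndarray(format="rgb24")
            out = self.img2img(image, prompt)  # TensorRT-accelerated pipeline
            new_frame = VideoFrame.from_ndarray(out, format="rgb24")
            new_frame.pts = frame.pts          # preserve the original timestamps
            new_frame.time_base = frame.time_base
            return new_frame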

  • TensorRT

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

    The videosd project uses the NVIDIA demo code from: https://github.com/NVIDIA/TensorRT/tree/main/demo/Diffusion

  • Radiata

    Stable diffusion webui based on diffusers.

    If you just want an accelerated UI, you can check https://github.com/ddPn08/Lsmith/ or https://github.com/VoltaML/voltaML-fast-stable-diffusion, which also use the same original NVIDIA code. These projects don't do img2img, though; if you need that, check the img2img pipeline in my repo. You need to compile the TensorRT engines for the models first. Their scripts break this into a few steps: export ONNX, optimize the ONNX graph, then compile an engine from the optimized ONNX (see the sketch after the project list). I streamlined that a bit, and I normally just run my compile.py in Docker to build the engines.

  • voltaML-fast-stable-diffusion

    Beautiful and Easy to use Stable Diffusion WebUI

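The engine-build steps mentioned above (export ONNX, optimize, compile) look roughly like the following. This is a minimal sketch assuming a diffusers UNet and the TensorRT Python API; the model ID, shapes, and file names are illustrative and not taken from compile.py.

    import torch
    import tensorrt as trt
    from diffusers import UNet2DConditionModel

    # 1) Export: trace the UNet to ONNX (model ID and dummy shapes are illustrative).
    unet = UNet2DConditionModel.from_pretrained(
        "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
    ).to("cuda")
    sample = torch.randn(2, 4, 64, 64, dtype=torch.float16, device="cuda")
    timestep = torch.tensor([1.0], device="cuda")
    text_emb = torch.randn(2, 77, 768, dtype=torch.float16, device="cuda")
    torch.onnx.export(
        unet, (sample, timestep, text_emb), "unet.onnx",
        input_names=["sample", "timestep", "encoder_hidden_states"],
        output_names=["latent"], opset_version=17,
    )

    # 2) Optimize: the NVIDIA demo folds constants and fuses ops; a similar effect
    #    comes from, e.g.:
    #    polygraphy surgeon sanitize unet.onnx --fold-constants -o unet.opt.onnx

    # 3) Compile: parse the optimized ONNX and serialize a TensorRT engine.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open("unet.opt.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 engines provide most of the speedup
    engine = builder.build_serialized_network(network, config)
    with open("unet.plan", "wb") as f:
        f.write(engine)

The serialized .plan file is what the accelerated pipelines then load at runtime in place of the PyTorch UNet.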