x-stable-diffusion vs sdui

| | x-stable-diffusion | sdui |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 547 | 38 |
| Growth | -0.4% | - |
| Activity | 4.5 | 10.0 |
| Last commit | 5 months ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
x-stable-diffusion
-
[D] Is there an affordable way to host a diffusers Stable Diffusion model publicly on the Internet for "real-time"-inference? (CPU or Serverless GPU?)
Cheapest would be to deploy it on your own using: https://github.com/stochasticai/x-stable-diffusion. Let me know if you need more help on real-time inference.
-
[D]deploy stable diffusion
However, I suggest you "accelerate" your inference first. For example, you can use open-source inference engines (see: https://github.com/stochasticai/x-stable-diffusion) to easily accelerate your inference 2x or more. That means you can generate 2x more images per dollar on public clouds.
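The cost claim in that post follows directly from hourly GPU billing: if throughput doubles while the hourly price stays fixed, the per-image cost halves. A minimal sketch with assumed numbers (the price and throughput below are illustrative placeholders, not measurements from any benchmark):

```python
# Hypothetical figures: a cloud GPU instance billed by the hour,
# throughput measured in images generated per hour.
gpu_cost_per_hour = 1.50          # assumed price, USD/hour
baseline_images_per_hour = 600    # assumed baseline throughput
speedup = 2.0                     # the 2x claimed in the post

baseline_cost = gpu_cost_per_hour / baseline_images_per_hour
accelerated_cost = gpu_cost_per_hour / (baseline_images_per_hour * speedup)

# A 2x speedup halves cost per image, i.e. 2x more images per dollar.
print(accelerated_cost / baseline_cost)  # → 0.5
```

The same ratio holds for any assumed price or throughput, which is why the "2x more images / $" phrasing is independent of the actual cloud provider.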
-
30% Faster than xformers? voltaML vs xformers stable diffusion - NVIDIA 4090
Brilliant. The x-stable-diffusion sample images (TensorRT, AITemplate, etc.) suggested the outputs weren't consistent between the optimizations at all, unless they hadn't locked the seed, which would have been foolish for the test.
-
Up to 2.5x speed-up of Stable Diffusion/Dreambooth using one line of code with voltaML.
I was looking at this three days ago. The problem is there seems to be a huge difference in what is being generated, judging by the example spread on https://github.com/stochasticai/x-stable-diffusion , whereas copying the model, params, and seed should give a near-identical image.
- Using Tensor Cores for Deep Learning.
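Both threads above hinge on the same point: with the model, parameters, and seed all locked, different inference backends should produce near-identical images, so visible divergence in a sample spread suggests either an unlocked seed or genuine numerical drift between engines. A minimal stdlib sketch of the seed-determinism principle, using Python's `random` module as a stand-in for a sampler's noise source (real samplers draw from the GPU RNG, where cross-backend results can still differ slightly):

```python
import random

def sample(seed, steps=5):
    # Stand-in for a diffusion sampler: with a fixed seed, the same
    # sequence of "noise" values is drawn on every run.
    rng = random.Random(seed)
    return [rng.random() for _ in range(steps)]

a = sample(42)
b = sample(42)
c = sample(43)
assert a == b   # same seed: identical trajectory, so comparable outputs
assert a != c   # different seed: different trajectory, incomparable outputs
```

This is why a fair cross-engine benchmark fixes the seed first; only then can remaining differences be attributed to the optimization itself (e.g. FP16 rounding) rather than to different starting noise.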
sdui
-
How to use other diffusers (k_euler) with various Mac SD forks?
I have the k-diffusion samplers working on M1 in my repo: https://github.com/harskish/sdui If you're comfortable with programming, you could port the patches over.
What are some alternatives?
voltaML - ⚡VoltaML is a lightweight library to convert and run your ML/DL deep learning models in high performance inference runtimes like TensorRT, TorchScript, ONNX and TVM.
carefree-creator - AI magics meet Infinite draw board.
AITemplate - AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
infinite-zoom-stable-diffusion - Resources for creating infinite-zoom videos using Stable Diffusion; you can use multiple prompts and it is easy to use.
sd_dreambooth_extension
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
infery-examples - A collection of demo-apps and inference scripts for various deep learning frameworks using infery (Python).
ecco - Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT-2, BERT, RoBERTa, T5, and T0).
jukebox - Code for the paper "Jukebox: A Generative Model for Music"
labml - 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
stable-diffusion-webui - Stable Diffusion web UI
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT