x-stable-diffusion
Real-time inference for Stable Diffusion - 0.88s latency. Covers AITemplate, nvFuser, TensorRT, FlashAttention. Join our Discord community: https://discord.com/invite/TgHXuSJEk6 (by stochasticai)
voltaML
⚡VoltaML is a lightweight library to convert and run your ML/DL deep learning models in high performance inference runtimes like TensorRT, TorchScript, ONNX and TVM. (by VoltaML)
| | x-stable-diffusion | voltaML |
|---|---|---|
| Mentions | 5 | 5 |
| Stars | 548 | 1,184 |
| Growth | 0.0% | 0.0% |
| Activity | 4.5 | 10.0 |
| Latest commit | 5 months ago | over 1 year ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
x-stable-diffusion
Posts with mentions or reviews of x-stable-diffusion.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-11-21.
- [D] Is there an affordable way to host a diffusers Stable Diffusion model publicly on the Internet for "real-time" inference? (CPU or serverless GPU?)

  Cheapest would be to deploy it on your own using https://github.com/stochasticai/x-stable-diffusion. Let me know if you need more help with real-time inference.
- [D] Deploy stable diffusion

  However, I suggest you "accelerate" your inference first. For example, you can use open-source inference engines (see https://github.com/stochasticai/x-stable-diffusion) to easily accelerate your inference 2x or more. That means you can generate 2x more images per dollar on public clouds.
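The "2x more images / $" claim in the post above is simple arithmetic: at a fixed hourly GPU price, halving per-image latency doubles throughput and therefore images per dollar. A minimal sketch of that calculation, where the latency and price figures are illustrative assumptions rather than measured benchmarks:

```python
# Back-of-the-envelope cost arithmetic (all numbers are assumptions,
# not measured benchmarks): halving per-image latency doubles images/$.

def images_per_dollar(latency_s: float, gpu_hourly_usd: float) -> float:
    """Images generated per dollar of GPU time at a given per-image latency."""
    images_per_hour = 3600.0 / latency_s
    return images_per_hour / gpu_hourly_usd

# Hypothetical figures: an unoptimized pipeline at 1.76 s/image vs. a
# 2x-accelerated one at 0.88 s/image, on a $1.10/hour cloud GPU.
baseline = images_per_dollar(latency_s=1.76, gpu_hourly_usd=1.10)
optimized = images_per_dollar(latency_s=0.88, gpu_hourly_usd=1.10)

print(f"baseline:  {baseline:.0f} images/$")
print(f"optimized: {optimized:.0f} images/$")
print(f"gain: {optimized / baseline:.1f}x")  # ~2x, matching the speed-up
```

The ratio tracks the speed-up directly, which is why acceleration libraries advertise cost savings rather than just latency.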
- 30% faster than xformers? voltaML vs xformers stable diffusion - NVIDIA 4090

  Brilliant. The x-stable-diffusion TensorRT / AITemplate etc. sample images suggested they weren't consistent between the optimizations at all, unless they hadn't locked the seed, which would have been foolish for the test.
- Up to 2.5x speed-up of Stable Diffusion/Dreambooth using one line of code with voltaML

  I was looking at this three days ago. The problem is there seems to be a huge difference in what is being generated, judging by the example spread at https://github.com/stochasticai/x-stable-diffusion, whereas copying the model, params, and seed should give a near-identical image.
- Using Tensor Cores for Deep Learning.
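The two posts above hinge on the same point: with the model, parameters, and seed held fixed, the initial noise is fully determined, so any visible difference between optimized and unoptimized outputs must come from the optimization itself. A minimal stdlib sketch of why a locked seed matters (this illustrates the general principle with Python's `random` module, not the repositories' actual pipelines):

```python
# Why benchmark comparisons lock the seed: a seeded generator reproduces
# its pseudo-random stream exactly, so two runs with the same seed start
# from identical "noise" and differences can be attributed to the backend.
import random

def sample_noise(seed: int, n: int = 4) -> list:
    """Draw n pseudo-random values from an explicitly seeded generator."""
    rng = random.Random(seed)  # isolated generator; global state untouched
    return [rng.random() for _ in range(n)]

run_a = sample_noise(seed=42)  # e.g. baseline pipeline
run_b = sample_noise(seed=42)  # e.g. TensorRT-optimized pipeline
run_c = sample_noise(seed=7)   # unlocked/different seed

print(run_a == run_b)  # → True: same seed, identical noise
print(run_a == run_c)  # → False: different seed, different noise
```

An unlocked seed changes the starting noise between runs, which makes any side-by-side image comparison meaningless, the concern raised in both posts.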
voltaML
Posts with mentions or reviews of voltaML.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-11-21.
- Very first testing version of voltaML is out (giant speed increase)
- VoltaML – convert DL models in high performance inference runtimes
- [R] Up to 2.5x speed-up of Stable Diffusion/Dreambooth using one line of code with voltaML

  Please follow here: https://github.com/VoltaML/voltaML
- Up to 2.5x speed-up of Stable Diffusion/Dreambooth using one line of code with voltaML

  Follow us here to get updates on the SD acceleration: https://github.com/VoltaML/voltaML
- [R] Open source inference acceleration library - voltaML
What are some alternatives?
When comparing x-stable-diffusion and voltaML you can also consider the following projects:
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
sd_dreambooth_extension
jukebox - Code for the paper "Jukebox: A Generative Model for Music"
infery-examples - A collection of demo-apps and inference scripts for various deep learning frameworks using infery (Python).
stable-diffusion-webui - Stable Diffusion web UI
sdui - Local ImGui UI for Stable Diffusion. Features embedded PNG metadata, Apple M1 fixes, result caching, img2img, and more!
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
x-stable-diffusion vs AITemplate
voltaML vs AITemplate
x-stable-diffusion vs sd_dreambooth_extension
voltaML vs sd_dreambooth_extension
x-stable-diffusion vs jukebox
voltaML vs jukebox
x-stable-diffusion vs infery-examples
voltaML vs stable-diffusion-webui
x-stable-diffusion vs sdui
x-stable-diffusion vs stable-diffusion-webui
x-stable-diffusion vs TensorRT