x-stable-diffusion VS TensorRT

Compare x-stable-diffusion vs TensorRT and see their differences.

x-stable-diffusion

Real-time inference for Stable Diffusion - 0.88s latency. Covers AITemplate, nvFuser, TensorRT, and FlashAttention. Join our Discord community: https://discord.com/invite/TgHXuSJEk6 (by stochasticai)

TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (by pytorch)
|                    | x-stable-diffusion | TensorRT |
|--------------------|--------------------|----------|
| Mentions           | 5                  | 5        |
| Stars              | 546                | 2,343    |
| Stars growth       | -0.5%              | 1.9%     |
| Activity           | 4.5                | 9.5      |
| Latest commit      | 5 months ago       | 6 days ago |
| Language           | Jupyter Notebook   | Python   |
| License            | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

x-stable-diffusion

Posts with mentions or reviews of x-stable-diffusion. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2022-11-21.

TensorRT

Posts with mentions or reviews of TensorRT. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2023-02-06.

What are some alternatives?

When comparing x-stable-diffusion and TensorRT, you can also consider the following projects:

voltaML - ⚡VoltaML is a lightweight library to convert and run your machine learning and deep learning models in high-performance inference runtimes like TensorRT, TorchScript, ONNX and TVM.

torch2trt - An easy-to-use PyTorch to TensorRT converter

AITemplate - AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.

onnxruntime - ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

sd_dreambooth_extension

cutlass - CUDA Templates for Linear Algebra Subroutines

infery-examples - A collection of demo-apps and inference scripts for various deep learning frameworks using infery (Python).

onnx-simplifier - Simplify your onnx model

jukebox - Code for the paper "Jukebox: A Generative Model for Music"

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

sdui - Local ImGui UI for Stable Diffusion. Features embedded PNG metadata, Apple M1 fixes, result caching, img2img, and more!

transformer-deploy - Efficient, scalable, and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀