Stable-Diffusion-ONNX-FP16
Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. It can run accelerated on all DirectML-supported cards, including AMD and Intel.
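The core idea behind the FP16 models is halving precision to halve memory use, which is what makes larger models fit on DirectML cards. The actual conversion is done with ONNX tooling (not shown here); below is only a minimal numpy sketch of the size/precision trade-off, using a made-up weight tensor as a stand-in for real model weights:

```python
import numpy as np

# Hypothetical FP32 weight tensor standing in for an ONNX model's weights.
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((1024, 1024)).astype(np.float32)

# Casting to FP16 halves the memory footprint...
weights_fp16 = weights_fp32.astype(np.float16)
assert weights_fp16.nbytes == weights_fp32.nbytes // 2

# ...at the cost of a small rounding error per element (~2^-11 relative).
max_err = float(np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32))))
print(f"size: {weights_fp32.nbytes} -> {weights_fp16.nbytes} bytes, "
      f"max abs error: {max_err:.2e}")
```

In practice the repo's workflow converts the ONNX graph itself to FP16 so DirectML executes it at half precision end to end; the numbers above just illustrate why that halves VRAM requirements with only a small accuracy cost.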
Fastest: SHARK (can do 4.6 it/s), but you'll have to deal with it changing frequently and (for now) a custom driver: https://github.com/nod-ai/SHARK/blob/main/shark/examples/shark_inference/stable_diffusion/stable_diffusion_amd.md