stable-diffusion-rocm
onnx
| | stable-diffusion-rocm | onnx |
|---|---|---|
| Mentions | 5 | 38 |
| Stars | 57 | 16,858 |
| Growth (stars, month over month) | - | 2.4% |
| Activity | 0.0 | 9.5 |
| Last commit | about 1 year ago | 3 days ago |
| Language | Dockerfile | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-rocm
- [D] About the current state of ROCm
Re: stable diffusion https://github.com/AshleyYakeley/stable-diffusion-rocm
- It's time to upscale FSR 2 even further: Meet FSR 2.1
Very easy actually. This is not officially documented, but with a recent enough kernel you don't have to install anything. You can grab the official rocm container and it'll just work. For example for Stable Diffusion see https://github.com/AshleyYakeley/stable-diffusion-rocm/blob/...
- Running Stable Diffusion on Your GPU with Less Than 10Gb of VRAM
I had good luck with these directions, which let you run inside a docker container:
https://github.com/AshleyYakeley/stable-diffusion-rocm
I had to make the one line change suggested in issue #3 to get it to run under 8GB.
radeontop suggests 4GB might work.
I also had to add this environment variable to make it work on my unsupported Radeon 6600 XT (see the sketch at the end of this entry):
HSA_OVERRIDE_GFX_VERSION=10.3.0
It takes under two minutes per batch of 5 images with the --turbo option.
(Base OS is Manjaro; using the distro's version of Docker, not the Flatpak package.)
If you don't have a GPU, Paperspace will rent you an appropriate VM.
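A minimal sketch of that workaround, assuming a ROCm build of PyTorch (the override has to be in the environment before the ROCm runtime initializes):

```python
import os

# Must be set before torch (and thus the ROCm/HSA runtime) is imported;
# equivalently, pass it to the container with `docker run -e ...`.
# "10.3.0" maps to gfx1030, which RDNA2 cards such as the RX 6600 XT
# can masquerade as.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

import torch

print(torch.cuda.is_available())  # True once the override takes effect
```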
- Run Stable Diffusion on Your M1 Mac’s GPU
I have it working on an RX 6800; I used the scripts from this repo[0] to build a Docker image that has the ROCm drivers and PyTorch installed.
I'm running Ubuntu 22.04 LTS as the host OS and didn't have to touch anything beyond the basic Docker install. The next step is to build a new Dockerfile that adds in the Stable Diffusion WebUI.[1] (See the smoke-test sketch after the links.)
[0] https://github.com/AshleyYakeley/stable-diffusion-rocm
- Dockerfile for easy use on an AMD GPU
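As a quick smoke test inside such a container (a sketch assuming a ROCm build of PyTorch, which exposes the AMD GPU through the `cuda` API):

```python
import torch

# ROCm builds of PyTorch report the HIP version here (None on CUDA builds).
print("HIP:", torch.version.hip)
assert torch.cuda.is_available(), "container cannot see the GPU"
print("Device:", torch.cuda.get_device_name(0))

# Run a small matmul on the GPU to confirm kernels actually execute.
x = torch.randn(1024, 1024, device="cuda")
print((x @ x).sum().item())
```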
onnx
- Onyx, a new programming language powered by WebAssembly
- From Lab to Live: Implementing Open-Source AI Models for Real-Time Unsupervised Anomaly Detection in Images
Once your model has been trained and validated using Anomalib, the next step is to prepare it for real-time implementation. This is where ONNX (Open Neural Network Exchange) or OpenVINO (Open Visual Inference and Neural Network Optimization) comes into play.
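As a minimal sketch of that export step for a PyTorch model (the ResNet placeholder and input shape below stand in for Anomalib's actual model and API):

```python
import torch
import torchvision

# Placeholder model standing in for a trained anomaly-detection network.
model = torchvision.models.resnet18(weights=None).eval()

# A dummy input fixes the graph's input shape during tracing.
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
)
```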
- Object detection with ONNX, Pipeless and a YOLO model
ONNX is an open format from the Linux Foundation for representing machine learning models. It is being widely adopted by the machine learning community and is compatible with most machine learning frameworks, such as PyTorch and TensorFlow. Converting a model between any of those formats and ONNX is simple and in most cases takes a single command.
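A sketch of loading such a converted model with ONNX Runtime (the file name and 640x640 input shape are assumptions typical of YOLO exports, not Pipeless specifics):

```python
import numpy as np
import onnxruntime as ort

# "yolo.onnx" is a placeholder; YOLO-family exports typically take a
# (batch, 3, 640, 640) float32 tensor of normalized pixels.
session = ort.InferenceSession("yolo.onnx", providers=["CPUExecutionProvider"])

frame = np.random.rand(1, 3, 640, 640).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)  # raw detections, e.g. boxes plus class scores
```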
- 38TB of data accidentally exposed by Microsoft AI researchers
Hopefully the continued adoption of ONNX[0], models-as-protobufs, will solve this issue.
[0] https://github.com/onnx/onnx
- Reddit’s LLM text model for Ads Safety
Running inference for large models on CPU is not a new problem, and fortunately there has been great progress on optimization frameworks that speed up matrix and tensor computations on CPU. We explored multiple frameworks and methods to improve latency, namely TorchScript, BetterTransformer, and ONNX.
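A rough sketch of the ONNX Runtime side of that kind of CPU tuning (the model file and thread count are illustrative, not Reddit's actual settings):

```python
import onnxruntime as ort

opts = ort.SessionOptions()
# Apply all graph-level optimizations (constant folding, node fusion, ...).
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
# Thread count is illustrative; tune it to the host's cores.
opts.intra_op_num_threads = 4

session = ort.InferenceSession(
    "classifier.onnx",  # placeholder model file
    sess_options=opts,
    providers=["CPUExecutionProvider"],
)
```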
- Operationalize TensorFlow Models With ML.NET
ONNX is a format for representing machine learning models in a portable way. Additionally, ONNX models can be easily optimized and thus become smaller and faster.
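One common way to get the smaller-and-faster effect is dynamic quantization via ONNX Runtime's tooling (a sketch; the file names are placeholders):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Rewrites float32 weights as int8, typically shrinking the file roughly
# 4x and speeding up CPU inference. File names are placeholders.
quantize_dynamic(
    "model.onnx",
    "model.int8.onnx",
    weight_type=QuantType.QInt8,
)
```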
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
I would say onnx.ai [0] provides more information about ONNX for those who aren’t working with ML/DL.
[0] https://onnx.ai
- Does ONNX Runtime not support Double/float64?
It's not clear why you think this sub is appropriate for a third-party system with a Python interface. Why don't you try their discussion group: https://github.com/onnx/onnx/discussions
- Async behaviour in python web frameworks
This kind of indirection through standardisation is pretty common as a way to make compatibility between different kinds of software components easier. Some other good examples are Microsoft's LSP project and ONNX for representing machine learning models. The former provides a standard so that IDEs don't have to re-invent the wheel for every programming language; the latter decouples training frameworks from inference frameworks. Going back to WSGI, you can find a pretty extensive rationale for the WSGI standard here if interested.
- Pickle safety in Python
What are some alternatives?
stable-diffusion
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
stable_diffusion.openvino
stable-diffusion-webui - Stable Diffusion web UI
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
3d-ken-burns - an implementation of 3D Ken Burns Effect from a Single Image using PyTorch
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion - A latent text-to-image diffusion model
stable-diffusion
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]