stable-diffusion-webui-docker vs openvino

| | stable-diffusion-webui-docker | openvino |
|---|---|---|
| Mentions | 58 | 17 |
| Stars | 6,045 | 5,962 |
| Growth | - | 3.8% |
| Activity | 6.3 | 10.0 |
| Latest commit | 8 days ago | 4 days ago |
| Language | Shell | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
stable-diffusion-webui-docker
- ‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity
  "I also use the Stable Diffusion WebUI Docker; I found it really easy to set up."
- ComfyUI docker images
  "I'm currently using this docker setup: https://github.com/AbdBarho/stable-diffusion-webui-docker"
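For anyone curious about that setup, bringing it up is essentially a two-command affair. The profile names below follow the repo's documented workflow at the time of writing and are an assumption; check its README in case they have changed:

```shell
# clone the project and enter it
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
cd stable-diffusion-webui-docker

# one-time step: download the base models into the shared data volume
docker compose --profile download up --build

# start the AUTOMATIC1111 UI (other profiles select other front ends, e.g. comfy)
docker compose --profile auto up --build
# the web UI is then served on http://localhost:7860
```

The same shared model volume is reused across profiles, so switching between the AUTOMATIC1111 and ComfyUI front ends does not require re-downloading models.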
- Can't figure out how to add new models to docker install of Automatic1111
  "I used the Docker install from here. It was easy to get Automatic1111's web interface up and running, but I'm trying to add new models and I can't figure out how to do it."
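With that Docker setup, new checkpoints generally just need to be dropped into the host directory that the compose file mounts into the container. The subfolder and filename below are assumptions based on AbdBarho's layout; verify the volume mapping in your `docker-compose.yml`:

```shell
# from the root of the stable-diffusion-webui-docker checkout:
# copy a downloaded checkpoint into the mounted models folder
# (subfolder path is an assumption -- check your compose volume mapping;
#  "my-model.safetensors" is a hypothetical filename)
cp ~/Downloads/my-model.safetensors ./data/models/Stable-diffusion/

# restart the UI service so it rescans the models directory
docker compose --profile auto restart
```

The new model should then appear in the checkpoint dropdown in the web UI, possibly after clicking its refresh button.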
- A1111 model folders in WSL
  "Docker ftw"
- What Stable Diffusion local install or online would you recommend/is your favorite?
- Infill on large images in Automatic1111 webui
  "I'm using A1111 via https://github.com/AbdBarho/stable-diffusion-webui-docker, which was such a perfectly simple way to get set up, and I've been having an absolute blast. It turns out that for the most part I don't even need the 24 GB of VRAM on my 3090."
- Midjourney is getting ridiculous with the prompts they're banning. Agree/disagree?
- Synthetic data generation for model training · Issue #350 · CompVis/stable-diffusion
- What is the best alternative to midjourney?
- [Self Hosted] I'm looking for an up-to-date tutorial for installing Stable Diffusion on Proxmox; does such a thing exist?
openvino
- FLaNK Stack 05 Feb 2024
- QUIK is a method for quantizing LLM post-training weights to 4 bit precision
- Intel OpenVINO 2023.1.0 released
- Intel OpenVINO 2023.1.0 released, open-source toolkit for optimizing and deploying AI inference
- OpenVINO 2023.1.0 released
- [N] Intel OpenVINO 2023.1.0 released, open-source toolkit for optimizing and deploying AI inference
- Powering Anomaly Detection for Industry 4.0
  "Anomalib is an open-source deep learning library developed by Intel that makes it easy to benchmark different anomaly detection algorithms on both public and custom datasets, simply by modifying a config file. As the largest public collection of anomaly detection algorithms and datasets, it has a strong focus on image-based anomaly detection. It’s a comprehensive, end-to-end solution that includes cutting-edge algorithms, relevant evaluation methods, prediction visualizations, hyperparameter optimization, and inference deployment code with Intel’s OpenVINO Toolkit."
What are some alternatives?
- stable-diffusion-webui - Stable Diffusion web UI
- TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
- lxc-gpu - Enjoy computation resources sharing at your laboratory with lxc-gpu!
- deepsparse - Sparsity-aware deep learning inference runtime for CPUs
- fast-stable-diffusion - fast-stable-diffusion + DreamBooth
- mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
- diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
- stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground, and it's very likely that I've done something that will break it.
- stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
- neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- rocm-gfx803
- nebuly - The user analytics platform for LLMs