| | ROCm-docker | stable-diffusion |
|---|---|---|
| Mentions | 3 | 382 |
| Stars | 392 | 65,504 |
| Growth | 1.0% | 1.1% |
| Activity | 5.1 | 0.0 |
| Last commit | 23 days ago | 22 days ago |
| Language | Shell | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ROCm-docker
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
https://rocm.docs.amd.com/projects/install-on-linux/en/lates... links to ROCm/ROCm-docker (https://github.com/ROCm/ROCm-docker), which is the source of docker.io/rocm/rocm-terminal (https://hub.docker.com/r/rocm/rocm-terminal):
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/rocm-terminal
-
Stable Diffusion PR optimizes VRAM, generate 576x1280 images with 6 GB VRAM
Not sure about the 6600, but there is a guide for Linux at least:
https://m.youtube.com/watch?v=d_CgaHyA_n4&feature=emb_logo
And this is possibly relevant as well; I had kept the link open.
https://github.com/RadeonOpenCompute/ROCm-docker/issues/38
-
It's working perfectly under Linux
As for the Docker image, I suppose you could build the image (https://hub.docker.com/r/rocm/pytorch) yourself from the sources (https://github.com/RadeonOpenCompute/ROCm-docker#building-images), which seems like quite a bit of work. Better yet, you could just use an older tag of the upstream image, e.g. rocm4.1.1_ubuntu18.04_py3.6_pytorch instead of rocm4.2_ubuntu18.04_py3.6_caffe2 or latest. Just make sure your container version matches your host ROCm version.
stable-diffusion
-
Go is bigger than crab!
It is a 1-click install of Stable Diffusion with an alternative web interface. You can choose a different approach, but this one is pretty simple, and I am new to this stuff.
-
Why & How to check Invisible Watermark
an invisible watermarking of the outputs, to help viewers identify the images as machine-generated.
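The CompVis scripts embed that watermark with the invisible-watermark package (the official txt2img script embeds the 17-byte string "StableDiffusionV1"). A minimal decoding sketch, assuming `pip install invisible-watermark opencv-python` and a hypothetical output.png:

```python
# Decode the DWT-DCT invisible watermark from a generated image.
# output.png is a hypothetical Stable Diffusion output.
import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread("output.png")
decoder = WatermarkDecoder('bytes', 136)       # 17 bytes * 8 bits = 136
data = decoder.decode(bgr, 'dwtDct')
print(data.decode('utf-8', errors='replace'))  # expect "StableDiffusionV1"
```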
-
How to create an Image generating AI?
It sounds like you just want to set up Stable Diffusion to run locally. I don't think your computer's specs will be able to handle it; you need a graphics card with a decent amount of VRAM. Stable Diffusion is written in Python, as is almost every open-source AI project I've seen. If you can get your hands on a system with an Nvidia RTX card with as much VRAM as possible, you're in business. I have an RTX 3060 with 12 GB of VRAM, and I can run Stable Diffusion and a whole variety of open-source LLMs, as well as other projects like face swap (Roop), Tortoise TTS, SadTalker, etc.
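For a sense of what running it locally involves, here is a minimal text-to-image sketch using the Hugging Face diffusers library; the model ID and prompt are illustrative, and float16 roughly halves VRAM use compared to float32:

```python
# Minimal text-to-image run with diffusers; assumes an NVIDIA GPU and
# `pip install torch diffusers transformers accelerate`.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative SD 1.5 checkpoint
    torch_dtype=torch.float16,         # fp16 weights: much lower VRAM use
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a fox").images[0]
image.save("fox.png")
```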
-
Two video cards...one dedicated to Stable Diffusion...the other for everything else on my PC?
Use specific GPU on multi GPU systems · Issue #87 · CompVis/stable-diffusion · GitHub
- Automatic1111 - Multiple GPUs
- Is Google simply unusable these days?
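The threads above boil down to restricting which device the Stable Diffusion process can see. A minimal sketch (the device index 1 is illustrative; the variable must be set before CUDA is initialized):

```python
# Hide every GPU except the second card (index 1) from this process.
# Must happen before torch initializes CUDA, i.e. before importing torch.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # illustrative device index

import torch
print(torch.cuda.device_count())      # 1: only the chosen card is visible
print(torch.cuda.get_device_name(0))  # the pinned GPU now appears as cuda:0
```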
-
Why are people so against compensation for artists?
I dealt with this in one of my posts. At least SD 1.1 through 1.5 were all trained with a batch size of 2048. The version pretty much everyone uses (1.5) is first pretrained at a resolution of 256x256 for 237K steps on laion2B-en; by the end of those steps it has seen roughly 500M images from laion2B-en. After that it is pretrained for 194K steps at a resolution of 512x512 on laion-high-resolution, a 170M-image subset of laion5B. Finally it is trained for 1,110K steps on LAION aesthetic v2 5+. This is easily verified by glancing at the model card of SD 1.5, though the card doesn't specify exactly which aesthetic subset was used for part of the training; for that you have to look at the CompVis GitHub repo. Thus, at the end of it all, both the most recent images and the majority of images will have come from LAION aesthetic v2 5+ (with every image seen approximately 4 times). Realistically, a lot of the weights obtained from pretraining on the 2B set will have been lost; that stage only provided a good starting point for the weights.
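A quick sanity check of those figures, assuming the quoted batch size and step counts; the dataset sizes (~170M for laion-high-resolution, ~600M for LAION aesthetic v2 5+) are approximate public figures, not exact:

```python
# Sanity-check the quoted training schedule for SD 1.5.
batch = 2048

stage1 = 237_000 * batch    # pretraining on laion2B-en at 256x256
stage2 = 194_000 * batch    # pretraining on laion-high-resolution at 512x512
stage3 = 1_110_000 * batch  # training on LAION aesthetic v2 5+

print(f"stage 1: {stage1 / 1e6:.0f}M images seen")         # ~485M, "roughly 500M"
print(f"stage 2: {stage2 / 170e6:.1f} passes over 170M")   # ~2.3 epochs
print(f"stage 3: {stage3 / 600e6:.1f} passes over ~600M")  # ~3.8, "approx 4 times"
```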
-
Is SDXL really open-source?
stable diffusion · CompVis/stable-diffusion@2ff270f · GitHub
- I want to ask the AI to draw me as a Pokemon anime character, then draw six Pokemon of my choice next to me. What are my best choices that are free, $15 or under, and $30 or under?
-
how can i create my own ai image model
Here for example --> https://github.com/CompVis/stable-diffusion
What are some alternatives?
awesome-kubernetes - A curated list for awesome kubernetes sources :ship::tada:
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
AiDungeon2-Docker-ROCm - Runs an AIDungeon2 fork in Docker on AMD ROCm hardware.
Real-ESRGAN - Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
ZLUDA - CUDA on AMD GPUs
diffusers-uncensored - Uncensored fork of diffusers
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
docker-elk - The Elastic stack (ELK) powered by Docker and Compose.
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
Dokku - A docker-powered PaaS that helps you build and manage the lifecycle of applications
onnx - Open standard for machine learning interoperability