stable-diffusion
| | stable-diffusion | stable-diffusion |
|---|---|---|
| Mentions | 40 | 111 |
| Stars | 594 | 1,749 |
| Growth | - | - |
| Activity | 0.0 | 10.0 |
| Last commit | over 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
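The page does not give the exact formula behind the activity number. Purely as an illustration of how such a metric could work, here is a sketch that weights recent commits more heavily (an assumed 30-day half-life) and maps the raw score to 0–10 by percentile among all tracked projects, so that 9.0 means the project beats 90% of them. The function name, decay constant, and mapping are all assumptions, not the site's actual method.

```python
def activity_score(commit_ages_days, all_project_scores, half_life_days=30.0):
    """Recency-weighted commit score mapped to a 0-10 percentile scale.

    commit_ages_days: ages (in days) of this project's recent commits.
    all_project_scores: raw scores of every tracked project, for ranking.
    """
    # Recent commits count more: each commit's weight halves every 30 days.
    raw = sum(0.5 ** (age / half_life_days) for age in commit_ages_days)
    # Percentile mapping: 9.0 means the raw score beats 90% of tracked projects.
    beaten = sum(1 for s in all_project_scores if s < raw)
    return 10.0 * beaten / len(all_project_scores)
```

Under this sketch, a project whose raw score outranks 95 of 100 tracked projects would show an activity of 9.5.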
stable-diffusion
- Stable Diffusion links from around September 12, 2022 that I collected for further processing
- Stable Diffusion links from around September 16, 2022 that I collected for further processing
- Can't install neonsecret's fork
1. `git clone https://github.com/neonsecret/stable-diffusion`
2. `pip install --upgrade -r requirements.txt`
3. `conda env create -f environment.yaml`
- AI Art: Dantooine Jedi Enclave, Unimaginably cool I can make fanart for any game
- Please recommend a way to run SD on 4GB Nvidia on Ubuntu
neonsecret's fork is the only one I can get to run on my 4 GB GeForce GTX 1050 Ti. I also use OptimizedSD: just the optimizedsd scripts folder copied over to neonsecret's. I've never been able to get AUTOMATIC1111's fork to work for me.
- Everything has worked flawlessly so far except this command. Any idea as to what the issue might be?
You can also clone neonsecret's version of optimized repository, if you want a better GUI, or use Arki's guide for AUTOMATIC1111's repo, which also has an optimized mode, and is pretty feature-packed.
- Why can't I use Stable Diffusion?
sd gui
- The first 4k picture ever produced by neural networks
Hey guys, today I produced the first ever 4k image using this: https://github.com/neonsecret/stable-diffusion/
- Best GUI overall?
https://github.com/neonsecret/stable-diffusion/ https://github.com/neonsecret/neonpeacasso I have two of those, for both low-end and high-end GPUs
- Literally 4k (3840x2176)
using https://github.com/neonsecret/stable-diffusion
stable-diffusion
- PSA: You can run your GPUs at 80% power and get the same rendering speeds while saving heat, fan noise, and electricity
Use or update this one: https://github.com/hlky/stable-diffusion. It has all the samplers, and if you want perfect faces, try k_euler_a.
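On NVIDIA cards the power-limit trick above is normally done with `nvidia-smi`. A minimal sketch, assuming a driver that supports software power limits; the helper names and the 0.8 fraction are illustrative, and setting the limit requires root:

```python
import subprocess

def target_power_limit(default_limit_w: float, fraction: float = 0.8) -> int:
    """Compute the reduced board power limit in whole watts."""
    return round(default_limit_w * fraction)

def apply_power_limit(limit_w: int) -> None:
    """Set the board power limit via nvidia-smi (requires root)."""
    # -pl (--power-limit) sets the limit in watts.
    subprocess.run(["nvidia-smi", "-pl", str(limit_w)], check=True)
```

For example, a card with a 250 W default limit would be set to 200 W via `apply_power_limit(target_power_limit(250.0))`; the current default and enforced limits can be checked with `nvidia-smi -q -d POWER`.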
- "a software developer after fixing a bug", by DALL-E 2
Try this one: https://github.com/hlky/stable-diffusion. You need at least a 1050 to run it, though.
- Which is the best fork out there ?
- At the end of my rope on the hlky fork, can anyone recommend any alternative GUI forks I could switch to?
https://github.com/hlky/stable-diffusion/issues/153, with 36 comments and tons of before-and-after comparisons (now deleted)
- CUDA memory error with hlky repo (4GB Nvidia) - any ideas?
I wanted to try the hlky version (https://github.com/hlky/stable-diffusion) because of the WebUI and the integration with upscaling models. It should also have an option to be optimized for low VRAM. To avoid getting a green square I have to add the parameters `--precision full --no-half`. When I run a prompt, even at the smallest image size, I immediately get a CUDA memory error. Interestingly, without these parameters there isn't any memory error (but, of course, the result is a green square).
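The error pattern above is consistent with what `--precision full --no-half` does: tensors stay in float32 (4 bytes per element) instead of float16 (2 bytes), roughly doubling VRAM use before any activations are counted. A back-of-the-envelope sketch; the ~860M parameter figure for Stable Diffusion v1 weights is approximate:

```python
def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate memory for model weights alone (no activations)."""
    return n_params * bytes_per_param / 2**30

sd_params = 860e6  # rough parameter count for the SD v1 model

half = weight_memory_gib(sd_params, 2)  # float16: ~1.6 GiB of weights
full = weight_memory_gib(sd_params, 4)  # float32: ~3.2 GiB of weights
# Full precision needs twice the VRAM for weights alone, which is why it
# overflows a 4 GB card even at the smallest image size, while half
# precision fits but can produce green/black output on some GTX 10-series
# cards.
```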
- Fallout 5: Toronto (created with AI)
Made using https://github.com/hlky/stable-diffusion
- Just released a Colab notebook that combines Craiyon + Stable Diffusion
Any chance to get this integrated into something like hlky's web ui?
- AI text-to-image: Moose and stave church with northern lights over the Norwegian flag in the background [OC] More details in the post
Linux guide here. I also run Linux, but I chose to set it up on my Windows box because the Nvidia card's drivers on Linux aren't very cooperative when it comes to adjusting the fans based on the card's sensors (so I have to set them manually).
- Using GFPGAN for only the eyes?
I'm seeing GFPGAN essentially remove all texture from faces, and I only want to use it on the eyes. Any thoughts on how to do this? I am using hlky/stable-diffusion now but I have no issues running a different repo/fork if needed and using command line.
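One workaround for the over-smoothing described above is to run GFPGAN on the whole face as usual, then composite its output back onto the original only inside an eye mask, keeping the original skin texture everywhere else. A minimal sketch with NumPy; in practice the mask would come from a facial-landmark detector, and the function name here is illustrative:

```python
import numpy as np

def blend_eyes_only(original: np.ndarray,
                    restored: np.ndarray,
                    eye_mask: np.ndarray) -> np.ndarray:
    """Take the restored (GFPGAN) pixels only where eye_mask is True.

    original, restored: HxWx3 uint8 images of the same size.
    eye_mask: HxW boolean array covering the eye regions.
    """
    mask3 = eye_mask[..., None]  # broadcast the mask over RGB channels
    return np.where(mask3, restored, original)
```

To hide the seam, one would typically feather the mask (e.g. Gaussian-blur a float mask) and alpha-blend instead of doing a hard `where` cut.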
- What's the best install of Stable Diffusion right now?
What are some alternatives?
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
diffusers-uncensored - Uncensored fork of diffusers
stable-diffusion-rocm
stable-diffusion-krita-plugin
stable-diffusion-webui - Stable Diffusion web UI
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
stable-diffusion
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
stable_diffusion.openvino
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]