stable-diffusion (lstein) vs stable-diffusion (basujindal)
| | stable-diffusion (lstein) | stable-diffusion (basujindal) |
|---|---|---|
| Mentions | 142 | 186 |
| Stars | 2,438 | 3,147 |
| Growth | - | - |
| Activity | 9.8 | 0.0 |
| Latest commit | over 1 year ago | 7 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion (lstein)
- [Stable Diffusion] Help needed to increase the maximum file size on a local installation
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- It's time!
-
Anybody running SD on a Macbook Pro? What are you using and how did you install it?
Yes, you can install it with Python! https://github.com/lstein/stable-diffusion works with macOS, and you can control all the common parameters via their WebUI or CLI :)
-
How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I'm using lstein fork ("dream") and when I create an image from the terminal, it also writes back to the terminal like this:
- I Resurrected “Ugly Sonic” with Stable Diffusion Textual Inversion
-
AI Seamless Texture Generator Built-In to Blender
> Whenever I ask for something like ‘seamless tiling xxxxxx’ it kinda sorta gets the idea, but the resulting texture doesn’t quite tile right.
Getting seamless tiling requires more than just having "seamless tiling" in the prompt. It also depends on whether the fork you're using has that feature at all.
https://github.com/lstein/stable-diffusion has the feature, but you need to pass it outside the prompt. So if you use the `dream.py` prompt CLI, you can pass it `"Hats on the ground" --seamless` and it should be perfectly tileable.
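For concreteness, a sketch of what that looks like in the lstein fork's interactive CLI (requires a local checkout and model weights; the `dream>` prompt and flag placement shown here are assumptions from the fork's documented usage, so verify against your checkout):

```
# From the repo root of lstein/stable-diffusion, launch the interactive CLI
python scripts/dream.py

# At the interactive prompt, switches go AFTER the quoted text prompt,
# not inside it - the flag is parsed by the CLI, not fed to the model:
dream> "Hats on the ground" --seamless
```

The key point is that `--seamless` is a generation option handled by the script, so putting the words "seamless tiling" inside the prompt text does nothing comparable.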
-
Auto SD Workflow - Update 0.2.0 - "Collections", Password Protection, Brand new UI + more
From https://github.com/lstein/stable-diffusion
-
Stable Diffusion GUIs for Apple Silicon
Stable Diffusion Dream Script: This is the original site/script for supporting macOS. I found this soon after Stable Diffusion was publicly released and it was the site which inspired me to try out using Stable Diffusion on a mac. They have a web-based UI (as well as command-line scripts) and a lot of documentation on how to get things working.
-
Still can't believe this technology is real. My talentless 2 minute sketch on the left.
I’m pretty sure it works for M2 as well - basically the newer ARM-based Macs. The instructions to get it working are detailed! https://github.com/lstein/stable-diffusion
stable-diffusion (basujindal)
-
Possible to load Civitai models in basujindal optimizedSD fork?
I am using this repo: https://github.com/basujindal/stable-diffusion and it works fine with e.g. this model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
-
40min to render 2x 256x256 pictures ..
That includes this optimized version : https://github.com/basujindal/stable-diffusion
- [Stable Diffusion] Stuck at UNet: running in eps-prediction mode
-
How to use safetensors locally (optimized-sd)?
Ah, I wasn't aware of that. I use this version, which was very easy to set up and use by CLI.
-
- [Stable Diffusion] Stable Diffusion 1.4 - CUDA memory error
Used repo recommended in https://github.com/CompVis/stable-diffusion/issues/39 to use https://github.com/basujindal/stable-diffusion - same result.
- [Stable Diffusion] Help with CUDA out of memory
- [Stable Diffusion] How do we create our own model?
-
Help installing optimisedSD please. Thank you so much!
As per the best solution I found, I downloaded this version (https://github.com/basujindal/stable-diffusion) and pasted the optimizedSD folder into the main folder (user>stable-diffusion-webui) as per the site's instructions.
-
Stable Diffusion Web UI: Using Optimized SD Post-Installation
The repo says you can simply grab the optimizedSD folder and paste it into the installation path, which I did. However, I'm not sure how to call upon its functionality. Again, the Reddit post says: >Remember to call the optimized python script `python optimizedSD/optimized_txt2img.py` instead of the standard `scripts/txt2img.py`. Though I'm not even sure where that script call is performed. Any ideas? Thanks in advance!
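For reference, a hedged sketch of what that substitution looks like on the command line (requires a local checkout of the basujindal fork plus model weights; the `--prompt`, `--H`, and `--W` flags are taken from that fork's documented usage, so check your checkout's README before relying on them):

```
# Standard script shipped with the base repo:
python scripts/txt2img.py --prompt "a painting of a fox"

# Optimized replacement - same idea, different entry point.
# The low-VRAM logic lives in optimizedSD/, so you invoke its script instead:
python optimizedSD/optimized_txt2img.py --prompt "a painting of a fox" --H 512 --W 512
```

In other words, the optimizedSD folder is not picked up automatically by the web UI; the optimization only applies when you run its scripts directly.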
- [Stablediffusion] Stable Diffusion 1.4 - CUDA out-of-memory error
What are some alternatives?
waifu-diffusion - stable diffusion finetuned on weeb stuff
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
stable-diffusion
stable-diffusion-webui - Stable Diffusion web UI
diffusers-uncensored - Uncensored fork of diffusers
chaiNNer - A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
txt2imghd - A port of GOBIG for Stable Diffusion
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
dream-textures - Stable Diffusion built-in to Blender