diffusers-interpret vs stable-diffusion-webui-feature-showcase

| | diffusers-interpret | stable-diffusion-webui-feature-showcase |
|---|---|---|
| Mentions | 15 | 33 |
| Stars | 259 | 974 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Last commit | over 1 year ago | 7 months ago |
| Language | Jupyter Notebook | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
diffusers-interpret
- Stable Diffusion links from around September 29, 2022 that I collected for further processing
-
Diffusers-Interpret 🤗🧨🕵️♀️ - Model explainability for 🤗 Diffusers
Check the project at https://github.com/JoaoLages/diffusers-interpret
- Diffusers-Interpret v0.4.0 is out! Explainability for Stable Diffusion
-
Can we please make a general update on all the "most important" news/repos available?
For those who want to explore what the denoising process looks like, check out the [diffusers-interpret package](https://github.com/JoaoLages/diffusers-interpret)! You can generate a GIF like [this one](https://github.com/TomPham97/diffuser/blob/main/diffusion-process.gif?raw=true).
-
Commas, How do they work?!
If you have lots of RAM, diffusers-interpret is an explainability tool that can show exactly how much each token is being weighted and which part of the image it is influencing (see the code sketch at the end of this list).
-
[D] Senior research scientist at GoogleAI, Negar Rostamzadeh: “Can't believe Stable Diffusion is out there for public use and that's considered as ‘ok’!!!”
github.com/JoaoLages/diffusers-interpret
- Model explainability for 🤗 Diffusers. Get explanations for your generated images with the latest stable diffusion model!
- [P] Model explainability for 🤗 Diffusers. Get explanations for your generated images with the latest stable diffusion model!
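The mentions above describe diffusers-interpret producing per-token attributions for a generated image. Below is a minimal sketch of that workflow; the class name `StableDiffusionPipelineExplainer`, the call parameter, and the output attributes follow the project README as best I recall it, so treat the exact names as assumptions and check the repo before relying on them.

```python
# Hypothetical sketch of getting token attributions with diffusers-interpret.
# Class, parameter, and attribute names are assumed from the project README;
# verify against https://github.com/JoaoLages/diffusers-interpret before use.
import torch
from diffusers import StableDiffusionPipeline
from diffusers_interpret import StableDiffusionPipelineExplainer

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

explainer = StableDiffusionPipelineExplainer(pipe)

# Restricting attributions to the last few denoising steps keeps memory in check,
# which matches the "if you have lots of RAM" caveat above.
output = explainer(
    "a corgi astronaut riding a bicycle on the moon",
    n_last_diffusion_steps_to_consider_for_attributions=5,
)

print(output.token_attributions)   # how strongly each prompt token influenced the image
output.image.save("corgi.png")     # assumed attribute name for the final image
```

The per-step images collected during generation are what the linked denoising-process GIF is built from; the exact helper for exporting that GIF varies by version, so check the README for the current API.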
stable-diffusion-webui-feature-showcase
- How to turn anime image to realistic image in stable diffusion?
- [Stable Diffusion] Textual inversion with the AUTOMATIC1111 webui
- [Ainudes] How do you create AI nudes?
- Is there any documentation for Automatic1111 WebUI?
-
Is there a properly comprehensive guide on prompt syntax?
A1111 https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
-
Which one is the "official" version?
Here's a quick rundown of a few of the most popular ones, with links. I started out using CMDR2, which is very easy to get running as a newbie. Then I kind of graduated to NMKD because I wanted something a little more mainstream but still easy to use. Finally, I decided I was hungry for all the strange and exotic bells and whistles that SD had to offer, so I installed Automatic1111. I also wanted something that would work well with my 4GB GTX 1650 laptop card, because that's considered "low VRAM" and is kind of on the edge for running SD; Automatic1111 fit the bill there, too.
-
At your service...
All generations were on the "Berry's Mix" model, which is made by combining NAI-final, Zenith's F111, r34 and SD1.4 according to this recipe. I used 30ish steps when generating images and inpainting, but 70-80 steps when outpainting because I read here that outpainting really benefits from extra steps. When outpainting I would generate 2-4 versions and pick the least broken one, then tidy up with inpainting.
-
What's the name of this feature?
Sounds like "outpainting", one of the very first features listed on the 1111 repo, with some instructions: https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase (a rough outpainting sketch in plain diffusers follows this list).
- How do you expand an image? (image to image)
-
Running neural networks locally.
I have no idea what you're talking about. Just get Automatic1111.
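Several of the posts above ask how to expand an image beyond its original borders (outpainting). One common way to do this outside the webui is to pad the canvas, mask the new area, and run an inpainting pipeline over it. The sketch below uses the Hugging Face diffusers inpainting pipeline; the model id, padding size, and prompt are illustrative assumptions, not something taken from the linked feature showcase.

```python
# Rough outpainting sketch: pad the canvas, mask the new border, inpaint it.
# Model id and sizes are illustrative; the webui's own outpainting scripts
# (linked above) work differently under the hood.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

src = Image.open("input.png").convert("RGB")   # e.g. a 512x512 generation
pad = 128                                      # pixels to extend on the right

# New, wider canvas with the original image pasted on the left.
canvas = Image.new("RGB", (src.width + pad, src.height), "white")
canvas.paste(src, (0, 0))

# Mask: white where the model should paint (the new strip), black elsewhere.
mask = Image.new("L", canvas.size, 0)
mask.paste(255, (src.width, 0, canvas.width, canvas.height))

# More steps tend to help outpainting, as the "At your service..." post notes.
result = pipe(
    prompt="a scenic landscape, seamless continuation to the right",
    image=canvas,
    mask_image=mask,
    height=canvas.height,
    width=canvas.width,
    num_inference_steps=75,
).images[0]
result.save("outpainted.png")
```

Generating a few variants and keeping the least broken one, then tidying the seam with inpainting, matches the workflow described in the post above.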
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
glid-3-xl-stable - stable diffusion training
diffusion-ui - Frontend for deeplearning Image generation
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM