| | yasd-discord-bot | stable-diffusion |
|---|---|---|
| Mentions | 14 | 20 |
| Stars | 112 | 338 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 or later |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
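The exact activity formula isn't published beyond "recent commits have higher weight"; a minimal sketch of one way such a recency-weighted score could be computed (the exponential decay and the 30-day half-life are my assumptions, not the tracker's actual math):

```python
import time

def activity_score(commit_timestamps, now=None, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age_in_days / half_life_days), so a commit made today
    counts ~1.0 and older commits fade toward zero.
    (Illustrative only; not the tracker's real formula.)"""
    now = time.time() if now is None else now
    day = 86400.0
    return sum(
        2.0 ** (-max(0.0, (now - t) / day) / half_life_days)
        for t in commit_timestamps
    )

# A commit made "now" contributes 1.0; one made 30 days ago, 0.5.
now = 1_700_000_000
print(activity_score([now, now - 30 * 86400], now=now))  # → 1.5
```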
yasd-discord-bot
-
Discord bot?
There's always my bot: https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot
-
Outpainting and inpainting (via clipseg) with the latest RunwayML 1.5 weights and VAE are out of beta and live on the LAION Discord Server
YASD-Discord-Bot
-
Who needs prompt2prompt anyway? SD 1.5 inpainting model with clipseg prompt for "hair" and various prompts for different hair colors
clipseg is an image segmentation method that finds a mask in an image from a text prompt. I implemented it as an executor for dalle-flow and added it to my bot, yasd-discord-bot.
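The bot's actual clipseg executor isn't shown in this thread. As a sketch of just the final masking step, assuming you already have clipseg's per-pixel logits for a prompt like "hair" (the `logits_to_mask` helper is hypothetical), thresholding turns them into the binary inpainting mask:

```python
import numpy as np

def logits_to_mask(logits, threshold=0.5):
    """Turn a per-pixel relevance map (e.g. clipseg logits) into a
    binary inpainting mask: 255 where the prompt matched, 0 elsewhere,
    following the common white-means-repaint convention."""
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid of the logits
    return np.where(probs > threshold, 255, 0).astype(np.uint8)

# Toy 2x2 "logits": positive pixels join the mask, negative ones don't.
mask = logits_to_mask(np.array([[4.0, -4.0], [0.1, -0.1]]))
print(mask.tolist())  # → [[255, 0], [255, 0]]
```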
- People who use Unstable Diffusion on Discord
- API?
-
Sequential token weighting, invented by Birch-san@GitHub, lets you bypass the 77-token limit and use as many tokens as you want; it also lets you sequentially alter an image
Merged into [dalle-flow](https://github.com/jina-ai/dalle-flow/pull/112) this morning and works on my Discord bot [yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot).
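Birch-san's real implementation is in the dalle-flow PR linked above; the core chunking idea can be sketched roughly as follows (the 75-content-token window wrapped in CLIP's BOS/EOS tokens is my reading of how CLIP's 77-token context is usually partitioned, and `chunk_tokens` is a made-up helper):

```python
def chunk_tokens(token_ids, bos=49406, eos=49407, window=75):
    """Split an arbitrarily long token sequence into CLIP-sized chunks:
    each chunk holds up to `window` content tokens and is wrapped in
    BOS/EOS (CLIP's start/end-of-text ids) so it can be encoded
    independently; the per-chunk embeddings are concatenated downstream."""
    chunks = []
    for i in range(0, len(token_ids), window):
        body = token_ids[i:i + window]
        chunks.append([bos] + body + [eos])
    return chunks

# 100 content tokens → one full 77-token chunk plus a 27-token remainder.
chunks = chunk_tokens(list(range(100)))
print([len(c) for c in chunks])  # → [77, 27]
```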
-
Can I remotely run stable diffusion on my computer but access it from my phone?
Personally, I run a Discord bot for myself. It's much more convenient to be able to look at your run history and use a good UI for generation. https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot/
- Multi-subprompt positive/negative weights and SD concepts library live on YASD Discord Bot at the LAION Discord Server
- YASD Discord Bot updated with experimental "outriffing" that allows you to img2img to different sizes, docker image instructions
- Free and Open Source Stable Diffusion Bot featuring Array Prompts, Interpolation, and more!
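How "outriffing" works isn't documented beyond the note above, but any img2img-to-a-different-size flow has to cope with Stable Diffusion's preference for dimensions divisible by 64. A small sketch of snapping a requested size (`snap_to_64` is a hypothetical helper, not part of the bot):

```python
def snap_to_64(width, height, minimum=64):
    """Round requested img2img dimensions to the nearest multiple of 64,
    since SD's 8x-downsampled latents plus UNet downsampling effectively
    want 64-pixel-aligned sides."""
    snap = lambda v: max(minimum, round(v / 64) * 64)
    return snap(width), snap(height)

print(snap_to_64(700, 500))  # → (704, 512)
```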
stable-diffusion
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- High-performance image generation using Stable Diffusion in KerasCV
-
Charl-e: “Stable Diffusion on your Mac in 1 click”
SD on an Intel Mac with Vega graphics runs pretty well, though; I think it ran at something like 3-5 iterations/s for me, which is decent. I ran either https://github.com/magnusviri/stable-diffusion or https://github.com/lstein/stable-diffusion, which have MPS support
-
Stable Diffusion PR optimizes VRAM, generate 576x1280 images with 6 GB VRAM
https://github.com/magnusviri/stable-diffusion/commit/d0b168...
Copying this change fixed seeds on M1 for me.
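The PR's actual diff isn't reproduced here; the usual VRAM trick in these optimizations is slicing the attention computation so the full N×N score matrix never exists in memory at once. A minimal numpy sketch of that idea (not the PR's code):

```python
import numpy as np

def sliced_attention(q, k, v, slice_size=64):
    """Compute softmax(q @ k.T) @ v in row slices of q, so peak memory
    holds a (slice_size x N) score block instead of the full (N x N)
    matrix; the result is identical to unsliced attention."""
    out = np.empty_like(q)
    for i in range(0, q.shape[0], slice_size):
        s = q[i:i + slice_size] @ k.T                     # (slice, N) scores
        s = np.exp(s - s.max(axis=1, keepdims=True))      # stable softmax
        out[i:i + slice_size] = (s / s.sum(axis=1, keepdims=True)) @ v
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((256, 32)) for _ in range(3))
full = sliced_attention(q, k, v, slice_size=256)  # one slice = full attention
assert np.allclose(sliced_attention(q, k, v, slice_size=64), full)
```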
-
Intel Mac User, How do I start?
You should be able to run it on a CPU. Maybe try this version. If MPS is supported on your Mac, you can check this out.
-
[P] Run Stable Diffusion on your M1 Mac’s GPU
A group of open-source hackers forked Stable Diffusion on GitHub and optimized the model to run on Apple's M1 chip, enabling images to be generated in ~15 seconds (512x512 pixels, 50 diffusion steps).
-
Run Stable Diffusion on Your M1 Mac’s GPU
magnusviri [0], the original author of the SD M1 repo credited in this article, has merged his fork into the lstein Stable Diffusion repo [1], and you can now run the lstein fork with M1 as of a few hours ago.
This adds a ton of functionality - GUI, Upscaling & Facial improvements, weighted subprompts etc.
This has been a big undertaking over the last few days, and I highly recommend checking it out.
[0] https://github.com/magnusviri/stable-diffusion
-
How are Mac people using Windows for A.I. stuff?
You can run it on an M1. Using a MacBook Pro with an M1 Max and 32 GB, I get 512x512 in about 50 seconds. Use this branch: https://github.com/magnusviri/stable-diffusion/tree/apple-mps-support
-
ResolvePackageNotFound
I had this error too, and I tried a ton of things to get cudatoolkit to install, without any luck. This fork has an environment-mac.yml file that actually got it working on my M1 Max: https://github.com/magnusviri/stable-diffusion/tree/apple-silicon-mps-support
-
If I set a seed value and re-run using the exact same settings, should I get the same image back each time?
But when I run it (locally, using the Mac M1 port), it creates a different image every time.
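Identical seed plus identical settings should indeed reproduce the same image; the early M1 port had seeding bugs, which is why runs diverged. The general pattern is to pin every RNG before sampling. A toy sketch (`seeded_sample` is a stand-in, not real pipeline code; in an actual SD pipeline you would also call `torch.manual_seed(seed)` and rely on the MPS backend honoring it):

```python
import random
import numpy as np

def seeded_sample(seed):
    """With every RNG seeded up front, the same seed must yield the
    same output; this toy draw stands in for the sampler's noise."""
    random.seed(seed)
    np.random.seed(seed)
    return np.random.rand(2, 2)

# Same seed and settings → bit-identical "image".
a, b = seeded_sample(42), seeded_sample(42)
assert (a == b).all()
```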
What are some alternatives?
discord-stable-diffusion - A neat Discord bot to run Stable Diffusion locally
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
rocm-build - build scripts for ROCm
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
stable-diffusion - Latent Text-to-Image Diffusion
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
stable-diffusion - A latent text-to-image diffusion model
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]