| | stable-diffusion | cog-stable-diffusion |
|---|---|---|
| Mentions | 17 | 26 |
| Stars | 1,408 | 338 |
| Growth | - | - |
| Activity | 2.9 | 10.0 |
| Latest commit | 4 months ago | over 1 year ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
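The site doesn't publish the exact activity formula, but "recent commits have higher weight" suggests a recency-weighted commit count. A minimal sketch under that assumption (exponential decay with a hypothetical 30-day half-life; illustrative only, not the site's actual metric):

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age / half_life), so newer commits count more.
    (Illustrative assumption, not the site's published formula.)"""
    return sum(2 ** (-age / half_life_days) for age in commit_ages_days)

# Ten commits today outweigh ten commits from three months ago.
recent = activity_score([0] * 10)   # 10.0
stale = activity_score([90] * 10)   # 10 * 2**-3 = 1.25
```

Under this toy scoring, a repo whose commits are all months old (like cog-stable-diffusion's "over 1 year ago") decays toward zero regardless of total commit count.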
stable-diffusion
-
Is it possible to merge VAEs?
Download this training project: git clone https://github.com/justinpinkney/stable-diffusion.git
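On the merging question: checkpoint-merging tools (like the merge-models project listed under alternatives) typically take a per-key weighted average of the two state dicts, and the same idea applies to VAE weights. A sketch with plain numbers standing in for tensors (with real models you'd apply this to torch `state_dict()` tensors):

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Weighted average of two state dicts: alpha = 0 keeps A,
    alpha = 1 keeps B. Only meaningful when both models share
    an architecture, i.e. identical keys and tensor shapes."""
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

vae_a = {"decoder.w": 1.0, "decoder.b": 0.0}
vae_b = {"decoder.w": 3.0, "decoder.b": 2.0}
merged = merge_state_dicts(vae_a, vae_b, alpha=0.25)
# merged["decoder.w"] == 1.5, merged["decoder.b"] == 0.5
```

Whether a merged VAE decodes well is an empirical question; the averaging itself is mechanical.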
-
How can I install this Image Mixer onto AUTOMATIC1111's webui?
It looks like it's using https://github.com/justinpinkney/stable-diffusion/blob/4ac995b6f663b74dfe65400285e193d4167d259c/scripts/gradio_image_mixer.py to do the bulk of the work, meaning the core functionality is built into this stable-diffusion fork; the UI just isn't built to support it. Their ckpt is here too: https://huggingface.co/lambdalabs/image-mixer/tree/main.
-
Image Mixer CUDA Out of Memory
Any idea how to make Image Mixer work in this build? On an RTX 3060 with 12 GB of memory I get the message:
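For context on why 12 GB can run out: self-attention over the 64×64 latent of a 512×512 image builds a 4096×4096 score matrix per head, and Image Mixer additionally conditions on multiple image/CLIP embeddings. A back-of-envelope estimate (assumed head count and fp16; real usage varies by implementation):

```python
def attention_matrix_mib(latent_hw=64, heads=8, bytes_per_el=2):
    """Rough size of one layer's self-attention score matrix:
    tokens^2 * heads * bytes_per_element, in MiB. Ignores
    activations, K/V tensors, and gradients, so it's a floor."""
    tokens = latent_hw * latent_hw  # 64 * 64 = 4096 latent tokens
    return tokens * tokens * heads * bytes_per_el / 2**20

print(attention_matrix_mib())  # 256.0 MiB for a single layer
```

Memory-efficient attention (xformers) or attention slicing avoids materialising this full matrix at once, which is why those options are the usual fix on 12 GB cards.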
- Ideas for new features for AI generation techniques
-
AI Image Editing from Text! Imagic Explained
References:
►Read the full article: https://www.louisbouchard.ai/imagic/
►Kawar, B., Zada, S., Lang, O., Tov, O., Chang, H., Dekel, T., Mosseri, I. and Irani, M., 2022. Imagic: Text-Based Real Image Editing with Diffusion Models. arXiv preprint arXiv:2210.09276.
►Use it with Stable Diffusion: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
►My Newsletter (a new AI application explained weekly to your emails!): https://www.louisbouchard.ai/newsletter/
-
Imagic ( Google's Text-Based Image Editing ) implemented in Stable Diffusion
The notebook: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
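At its core, Imagic optimises a text embedding to reconstruct the input image, fine-tunes the model around that embedding, then linearly interpolates between the optimised and target embeddings to apply the edit. The interpolation step, sketched with plain lists standing in for embedding tensors:

```python
def interpolate_embedding(e_opt, e_tgt, eta=0.7):
    """Imagic-style embedding mix: eta = 0 reproduces the input
    image's optimised embedding, eta = 1 is the pure edit prompt.
    Intermediate eta values trade fidelity against edit strength."""
    return [(1 - eta) * a + eta * b for a, b in zip(e_opt, e_tgt)]

e_opt = [0.0, 1.0]  # embedding optimised to reconstruct the image
e_tgt = [1.0, 0.0]  # embedding of the target edit text
mid = interpolate_embedding(e_opt, e_tgt, eta=0.5)  # [0.5, 0.5]
```

In the real pipeline these are CLIP text-embedding tensors and the sweet spot for eta is found by eye, but the arithmetic is exactly this linear blend.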
-
[D] DreamBooth Stable Diffusion training in 10 GB VRAM, using xformers, 8bit adam, gradient checkpointing and caching latents.
There's a script for the SD --> Diffusers here: https://github.com/justinpinkney/stable-diffusion/blob/main/scripts/convert_sd_to_diffusers.py
-
[P] How to fine tune stable diffusion: how we made the text-to-pokemon model at Lambda
You can start with the github which contains the code: https://github.com/justinpinkney/stable-diffusion
-
Pokemon Stable Diffusion : A fine tuned model of Stable Diffusion to only create Pokemon
Hmmm, I just double checked the hashes of my local file, what's on huggingface, and what you showed above and they all match. I'm not familiar with that repo, so maybe something weird is going on. I tested it using the original txt2img script in the stable diffusion repo:
-
List of Stable Diffusion systems - Part 2
*PICK* (Added Sep. 12, 2022) Web app Stable Diffusion Image Variations by lambdalabs. GitHub repo. Generates variations of an input image without the use of a text prompt. Censored.
cog-stable-diffusion
-
Build Your Own AI Art Gallery
Stable Diffusion
-
Creating an AI photo generator and editing app with React
stable-diffusion: A latent text-to-image diffusion model capable of generating photorealistic images given any text input
-
Create AI-generated art via SMS with Replicate in Python
Next, make a Flask app so your app can receive the inbound text message sent to your Twilio phone number. That string is then passed to Replicate's stable diffusion model to generate photo-realistic images from that prompt.
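The Replicate call behind that flow is an authenticated POST to the predictions endpoint. A stdlib-only sketch that builds (but does not send) the request — the version hash is a placeholder to copy from the model page, and the token shown is hypothetical:

```python
import json
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"
VERSION = "<model-version-hash>"  # placeholder: copy from the model's API tab

def build_prediction_request(prompt, token):
    """Build the POST asking stable-diffusion to render `prompt`.
    Actually sending it requires a real Replicate API token."""
    body = json.dumps({"version": VERSION, "input": {"prompt": prompt}})
    return urllib.request.Request(
        API_URL,
        data=body.encode(),
        headers={"Authorization": f"Token {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_prediction_request("a watercolor fox", token="r8_...")
# urllib.request.urlopen(req) would start the prediction; the response
# includes a URL to poll until the generated image is ready.
```

In practice the `replicate` Python client wraps this polling loop for you; the raw request is shown here only to make the moving parts visible.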
-
Generating Custom Blog Post Images with AI using a Serverless Azure Function
The Azure Function App executes a serverless function that: a) runs the text summary model (Azure Cognitive Service for Language), b) runs the image generation model (stable-diffusion on Replicate), and c) uploads the image to Azure Blob Storage.
-
How do I get this chkpt and why would I need it?
I'm testing prompts online because it's faster than my computer: https://replicate.com/stability-ai/stable-diffusion
-
Do any integrations offer Stable Diffusion with Zapier?
There isn't a direct integration with Zapier, but you can use the Webhooks & API calls within Zapier to connect to Stable Diffusion. The simplest way is to do it through Replicate: https://replicate.com/stability-ai/stable-diffusion
- Is anyone else having issues getting into Stable Diffusion 1 on the web?
- Is there a free stable diffusion website?
-
GPT3/DALL-E2 Discord bot with medium/long term memory!
I ended up using stable diffusion from Replicate. They don't have a filter on the input text; they just analyse the output and block it if it seems to be NSFW. So sometimes NSFW-ish pictures may slip through.
-
Small 1.3, 1.4, 1.5, 2.0 model comparison and the mysterious case of the undressing pirate
I'm getting same-ish results from Replicate as from my local setup. They don't have PLMS as an option, and Euler trips that silly porn filter on the dressed pirates for whatever reason, but it seems to closely match.
What are some alternatives?
material_stable_diffusion - Tileable Stable Diffusion - Cog model
stability-sdk - SDK for interacting with stability.ai APIs (e.g. stable diffusion inference)
stable-diffusion-webui - Stable Diffusion web UI
merge-models - Merges two latent diffusion models at a user-defined ratio
stable-diffusion - A latent text-to-image diffusion model
CrossAttentionControl - Unofficial implementation of "Prompt-to-Prompt Image Editing with Cross Attention Control" with Stable Diffusion
stable-diffusion
make-a-video-pytorch - Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
replicate-prompt-to-image-sms
authcompanion2 - An admin-friendly, User Management Server (with Passkeys & JWTs) - for seamless and secure integration of user authentication
inpainter - A web GUI built with Next.js for inpainting with Stable Diffusion using the Replicate API.