CrossAttentionControl vs stable-diffusion

| | CrossAttentionControl | stable-diffusion |
|---|---|---|
| Mentions | 11 | 17 |
| Stars | 1,237 | 1,403 |
| Growth | - | - |
| Activity | 10.0 | 2.9 |
| Latest commit | over 1 year ago | 4 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CrossAttentionControl
- "How can I do X?" for image generation.
- The Stable Horde now supports img2img as well as multiple models available at the same time. And we just added SD 1.5.
- Is there any way to make Automatic1111 change an image into a different pose/style while keeping the subject of the image intact?
- Cross Attention Control with Stable Diffusion
- First round of results from the new Cross-Attention paper
  Stable Diffusion implementation of Cross Attention Control, GitHub page (Legend!): https://github.com/bloc97/CrossAttentionControl
- Prompt-to-Prompt Image Editing with Cross Attention Control
- Reproducing the method in 'Prompt-to-Prompt Image Editing with Cross Attention Control' with Stable Diffusion
- Prompt-to-Prompt Image Editing with Cross Attention Control in Stable Diffusion
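The mentions above all point at the same core trick from the Prompt-to-Prompt paper: compute the cross-attention maps for a source prompt, then inject those maps while attending over an edited prompt, so the image layout is preserved while the content changes. A minimal, self-contained NumPy sketch of that idea (toy shapes and names, not the repo's actual code):

```python
# Toy sketch of Prompt-to-Prompt cross-attention control.
# All names and shapes are illustrative, not taken from CrossAttentionControl.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v, injected_maps=None):
    """Scaled dot-product cross-attention.

    If injected_maps is given, those attention maps are reused instead of
    the ones computed from q and k -- this injection is the 'control' step.
    """
    d = q.shape[-1]
    maps = softmax(q @ k.T / np.sqrt(d))
    if injected_maps is not None:
        maps = injected_maps  # swap in the source prompt's maps
    return maps @ v, maps

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))                                       # image queries
k_src, v_src = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))   # source prompt tokens
k_dst, v_dst = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))   # edited prompt tokens

# Pass 1: attend to the source prompt and keep its attention maps.
_, src_maps = cross_attention(q, k_src, v_src)

# Pass 2: use the edited prompt's values but the source prompt's maps,
# preserving spatial layout while changing content.
out, used_maps = cross_attention(q, k_dst, v_dst, injected_maps=src_maps)
```

In the real method the same injection happens inside every cross-attention layer of the U-Net during sampling, with per-token weighting on top; this sketch only shows the map-swapping mechanism itself.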
stable-diffusion
- Is it possible to merge VAEs?
  Download this training project: git clone https://github.com/justinpinkney/stable-diffusion.git
- How can I install this Image Mixer onto Automatic1111's webui?
  It looks like it's using https://github.com/justinpinkney/stable-diffusion/blob/4ac995b6f663b74dfe65400285e193d4167d259c/scripts/gradio_image_mixer.py to do the bulk of the work, meaning the core functionality is built into stable-diffusion; the UI just isn't built to support it. Their ckpt is here too: https://huggingface.co/lambdalabs/image-mixer/tree/main.
- Image Mixer CUDA Out of Memory
  Any idea how to make Image Mixer work in this build? On an RTX 3060 with 12 GB of memory I get the message:
- Ideas for new features for AI generation techniques
- AI Image Editing from Text! Imagic Explained
  References:
  - Read the full article: https://www.louisbouchard.ai/imagic/
  - Kawar, B., Zada, S., Lang, O., Tov, O., Chang, H., Dekel, T., Mosseri, I. and Irani, M., 2022. Imagic: Text-Based Real Image Editing with Diffusion Models. arXiv preprint arXiv:2210.09276.
  - Use it with Stable Diffusion: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
  - Newsletter (a new AI application explained weekly): https://www.louisbouchard.ai/newsletter/
- Imagic (Google's text-based image editing) implemented in Stable Diffusion
  The notebook: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
- [D] DreamBooth Stable Diffusion training in 10 GB VRAM, using xformers, 8-bit Adam, gradient checkpointing, and caching latents
  There's a script for the SD --> Diffusers conversion here: https://github.com/justinpinkney/stable-diffusion/blob/main/scripts/convert_sd_to_diffusers.py
- [P] How to fine-tune Stable Diffusion: how we made the text-to-pokemon model at Lambda
  You can start with the GitHub repo, which contains the code: https://github.com/justinpinkney/stable-diffusion
- Pokemon Stable Diffusion: a fine-tuned model of Stable Diffusion that only creates Pokemon
  Hmmm, I just double-checked the hashes of my local file, what's on Hugging Face, and what you showed above, and they all match. I'm not familiar with that repo, so maybe something weird is going on. I tested it using the original txt2img script in the stable-diffusion repo:
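Checking a downloaded checkpoint against a published hash, as the commenter above describes, is a few lines with the standard library. A small sketch (the file path and published digest are illustrative):

```python
# Compute a file's SHA-256 in chunks, so multi-GB checkpoints
# never have to fit in memory at once.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Usage (illustrative path/digest): compare against the hash published
# alongside the checkpoint on Hugging Face:
# sha256_of("sd-pokemon.ckpt") == "<published hex digest>"
```

If the local digest differs from the published one, the download is corrupt or the file is not the one the uploader published.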
- List of Stable Diffusion systems - Part 2
  *PICK* (added Sep. 12, 2022) Web app Stable Diffusion Image Variations by lambdalabs. GitHub repo. Generates variations of an input image without use of a text prompt. Censored.
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
cog-stable-diffusion - Diffusers Stable Diffusion as a Cog model
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
material_stable_diffusion - Tileable Stable Diffusion - Cog model
Magic123 - [ICLR24] Official PyTorch Implementation of Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
stability-sdk - SDK for interacting with stability.ai APIs (e.g. stable diffusion inference)
nataili - Nataili is a Python library that provides tools for building multimodal AI applications. With its modular design, Nataili makes it easy to use only the tools you need to build custom AI solutions.
merge-models - Merges two latent diffusion models at a user-defined ratio
anima - Turn text into video using Stable Diffusion and Google FILM
make-a-video-pytorch - Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
MultiDiffusion - Official Pytorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" (ICML 2023)
authcompanion2 - An admin-friendly, User Management Server (with Passkeys & JWTs) - for seamless and secure integration of user authentication