| | openpose-editor | MultiDiffusion |
|---|---|---|
| Mentions | 23 | 13 |
| Stars | 1,592 | 903 |
| Growth | - | - |
| Activity | 10.0 | 4.8 |
| Latest commit | 7 months ago | 7 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openpose-editor
-
[ControlNet-Openpose Question] How to change the pose of shortened (chibi) animals with the ControlNet-Openpose technique?
You can try the Openpose Editor: draw a small skeleton on a large black canvas, then generate a pixel-perfect image at the same size as that canvas, so ControlNet gets the perspective of a small creature.
-
Openpose Controlnet on anime images
You want the Openpose Editor extension.
-
Olive Oyl using CN 1.1 and Regional Prompter, workflow in comments
I wanted to use OpenPose as well, however the preprocessor did not want to recognize the exaggerated cartoon. So I pulled her into OpenPose editor and traced the skeleton, putting her hand behind her head since the original was weirdly posed anyway. Exported the PNG, and brought it into the second CN slot, set to the openpose model with NO preprocessor. An ideal weight turned out to be 1.5. I chose to let the prompt be more important (old guess mode) on both CN inputs.
-
I got an error: 'stable-diffusion-webui\tmp\openpose-editor' already exists and is not an empty directory. How to solve it?
Error message is : GitCommandError: Cmd('git') failed due to: exit code(128) cmdline: git clone -v -- https://github.com/fkunn1326/openpose-editor.git C:\Users\user\stable-diffusion-webui\tmp\openpose-editor stderr: 'fatal: destination path 'C:\Users\user\stable-diffusion-webui\tmp\openpose-editor' already exists and is not an empty directory. '
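A sketch of one way to clear this error, assuming a standard webui install (the paths below come from the error message; adjust them to your own setup). The clone fails because an earlier, interrupted install left a non-empty tmp directory behind:

```shell
# The clone fails because a previous install attempt left a
# non-empty tmp folder behind. Delete it so git can clone cleanly.
cd ~/stable-diffusion-webui          # hypothetical install location
rm -rf tmp/openpose-editor           # remove the leftover partial clone
# Then restart the webui so it retries, or clone manually into extensions/:
git clone https://github.com/fkunn1326/openpose-editor.git \
  extensions/openpose-editor
```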
-
Controlnet seed?
The seed is only going to impact the "random" stuff. You would need to change the ControlNet input. OpenPose editor can load an image and generate a pose for ControlNet, but you will need to edit it to reflect the change of perspective. There are some 3D options among the extensions as well, but you would probably need to start from scratch with those. https://github.com/fkunn1326/openpose-editor
-
Auto1111 Openpose editor not working
I've installed the auto1111 openpose editor from https://github.com/fkunn1326/openpose-editor.git and the control net gui from https://github.com/Mikubill/sd-webui-controlnet.git. However, upon launch the error message above is given and the openpose editor isn't there. How do I fix this issue?
-
Are there free cloud based INVOKE AI models?
!git clone https://github.com/fkunn1326/openpose-editor /workspace/stable-diffusion-webui/extensions/openpose-editor
-
Making her "dance"?
Afterwards I put it into img2img and played around with the OpenPose editor. First I recreated the original pose, which went surprisingly well. Then I just had to create the poses for the hip swaying. Played around a bit with denoising strength as well; I was mostly at 0.7-0.85. I put the poses into ControlNet one after another and this was the result. It could be made a lot smoother with more images, as I only used about 10 different poses, but this was just a quick study for me on how well you can move a character with ControlNet. With the right prompts it should definitely be possible to smoothly spin the character around, so I might give that a shot later. But for now I'm once again off making LoRAs and the occasional "normal" post. See ya
-
Openpose extension tab not visible
It works now after reinstalling openpose, i use this one: GitHub - fkunn1326/openpose-editor: Openpose Editor for AUTOMATIC1111's stable-diffusion-webui
MultiDiffusion
-
Opendream: A Non-Destructive UI for Stable Diffusion
For composing this approach works pretty well
https://multidiffusion.github.io/
-
Messing with the denoising loop can allow you to reach new places in latent space. Over 8+ different research papers/Auto1111 extension ideas in a single pipe. Load once and do lots of different things (SD 2.1 or 1.5)
So I've continued to experiment with how many papers I can fit into a single pipe and have them play nicely together. The images below were created by combining the panorama code from omerbt/MultiDiffusion with the ideas from albarji/mixture-of-diffusers. Also turns out nateraw/stable-diffusion-videos can be seen as a special case of a panorama (in latent space rather than prompt space).
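For reference, the core MultiDiffusion trick these experiments build on can be sketched in a few lines: each denoising step is run per overlapping window, and the per-window results are averaged back into one latent. A minimal numpy sketch of just the fusion step (the function name, shapes, and toy values are my assumptions, not code from the repo):

```python
import numpy as np

def fuse_windows(latent_shape, windows, window_outputs):
    """MultiDiffusion-style fusion: average overlapping per-window
    denoising results back into a single latent. Illustrative sketch
    only; a real pipeline does this once per diffusion step."""
    acc = np.zeros(latent_shape)
    count = np.zeros(latent_shape)
    for (y, x, h, w), out in zip(windows, window_outputs):
        acc[..., y:y + h, x:x + w] += out
        count[..., y:y + h, x:x + w] += 1
    return acc / np.maximum(count, 1)  # avoid div-by-zero in uncovered areas

# toy example: an 8x8 latent covered by two overlapping 8x6 windows
windows = [(0, 0, 8, 6), (0, 2, 8, 6)]
outs = [np.ones((1, 4, 8, 6)), 3 * np.ones((1, 4, 8, 6))]
fused = fuse_windows((1, 4, 8, 8), windows, outs)
print(fused[0, 0, 0])  # 1s on the left, 2s in the overlap, 3s on the right
```

Because overlapping windows are averaged every step, neighboring tiles stay consistent with each other, which is what makes seamless panoramas possible without retraining.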
- MultiDiffusion Region Control, a prompt on each mask webui extension is out.
-
Hubble Diffusion with MultiDiffusion
Essentially, I fine-tuned Stable Diffusion 2.1 base (the 512x512) model on the ESA Hubble Deep Space Images & Captions dataset I collected from public Hubble images & captions. After around 33,000 training steps, I saved the model and was really impressed by the results. But I really wanted to be able to generate wallpaper-level quality space images, so I stumbled upon MultiDiffusion: a new project for generating massive panorama images using stable diffusion models. I then used hubble-diffusion-2 along with MultiDiffusion to generate each one of these amazing 2560x1536 images. Each image took a little over an hour to generate on a Google Colab T4 GPU. I used the following prompts for each of these images:
- MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
-
What is the maximum size a 3090 24gb can produce?
If you need generated and not upscaled 4k for some reason, try something like https://github.com/omerbt/MultiDiffusion
-
[R] [N] "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" enables controllable image generation without any further training or finetuning of diffusion models.
Project: https://multidiffusion.github.io/ Paper: https://arxiv.org/abs/2302.08113 GitHub: https://github.com/omerbt/MultiDiffusion
-
Meet MultiDiffusion: A Unified AI Framework That Enables Versatile And Controllable Image Generation Using A Pre-Trained Text-to-Image Diffusion Model
Quick Read: https://www.marktechpost.com/2023/02/24/meet-multidiffusion-a-unified-ai-framework-that-enables-versatile-and-controllable-image-generation-using-a-pre-trained-text-to-image-diffusion-model/ Paper: https://arxiv.org/abs/2302.08113 Github: https://github.com/omerbt/MultiDiffusion Project: https://multidiffusion.github.io/
-
You too can create panorama images of 512x10240+ (not a typo) using less than 6GB VRAM (vertorama works too). A modification of the MultiDiffusion code passes the image through the VAE in slices and then reassembles it. Potato computers of the world, rejoice.
So I haven't made many images with Stable Diffusion despite using it heavily. The reason is I've been messing with the internals of the diffusion pipe, to interfere with the diffusion process in different ways. Today's fun result is based on omerbt/MultiDiffusion for making panoramas.
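The slicing idea described above can be sketched independently of any real VAE: decode the wide latent one vertical strip at a time and concatenate the strips. Everything here (the function name, the stand-in decoder, the shapes) is a hypothetical illustration, not the poster's code; a real implementation would also overlap and blend the strips to hide seams:

```python
import numpy as np

def decode_in_slices(latent, decode, slice_w=64):
    """Decode a wide latent in vertical slices to cap peak memory,
    then stitch the decoded slices back together."""
    pieces = [decode(latent[..., x:x + slice_w])
              for x in range(0, latent.shape[-1], slice_w)]
    return np.concatenate(pieces, axis=-1)

# stand-in "decoder": upsample each latent cell 8x, mirroring SD's
# 8x spatial scale between latent and pixel space
fake_decode = lambda z: np.repeat(np.repeat(z, 8, axis=-2), 8, axis=-1)
latent = np.random.randn(1, 4, 64, 1280)   # latent for a 512x10240 panorama
image = decode_in_slices(latent, fake_decode)
print(image.shape)  # (1, 4, 512, 10240)
```

Peak memory now scales with the slice width instead of the full panorama width, which is why a 512x10240 image fits in under 6GB.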
-
First version of Stable Diffusion was released on August 22, 2022
If we combine Mixture of Diffusers + MultiDiffusion + Composer + cross-domain-compositing and probably some more I'm not thinking of.
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
open-pose-editor - online 3d openpose editor for stable diffusion and controlnet
sd-webui-3d-open-pose-editor - 3d openpose editor for stable diffusion and controlnet
mixture-of-diffusers - Mixture of Diffusers for scene composition and high resolution image generation
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
Diffusion-Models-Papers-Survey-Taxonomy - Diffusion model papers, survey, and taxonomy
ControlNet - Let us control diffusion models!
stable-diffusion-videos - Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
sd-webui-regional-prompter - set prompt to divided region
stable-diffusion-webui-sonar - Wrapped k-diffusion samplers with tricks to improve the generated image quality (maybe?), extension script for AUTOMATIC1111/stable-diffusion-webui