| | openpose-editor | sd-webui-regional-prompter |
|---|---|---|
| Mentions | 23 | 60 |
| Stars | 1,592 | 1,379 |
| Growth | - | - |
| Activity | 10.0 | 8.5 |
| Last Commit | 7 months ago | 25 days ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openpose-editor
-
[ControlNet-Openpose Question] How to change the pose of shortened (chibi) animals with the ControlNet-Openpose technique?
You can try the Openpose Editor extension: make a small skeleton on a large black image, then generate a pixel-perfect image the same size as the large one to give ControlNet the perspective of a small creature.
-
Openpose Controlnet on anime images
You want the Openpose Editor extension.
-
Olive Oyl using CN 1.1 and Regional Prompter, workflow in comments
I wanted to use OpenPose as well, however the preprocessor did not want to recognize the exaggerated cartoon. So I pulled her into OpenPose editor and traced the skeleton, putting her hand behind her head since the original was weirdly posed anyway. Exported the PNG, and brought it into the second CN slot, set to the openpose model with NO preprocessor. An ideal weight turned out to be 1.5. I chose to let the prompt be more important (old guess mode) on both CN inputs.
-
I got an error: stable-diffusion-webui\tmp\openpose-editor' already exists and is not an empty directory. How do I solve it?
Error message is : GitCommandError: Cmd('git') failed due to: exit code(128) cmdline: git clone -v -- https://github.com/fkunn1326/openpose-editor.git C:\Users\user\stable-diffusion-webui\tmp\openpose-editor stderr: 'fatal: destination path 'C:\Users\user\stable-diffusion-webui\tmp\openpose-editor' already exists and is not an empty directory. '
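The fix is to delete the leftover, half-cloned folder so the extension install can retry with a clean destination. A minimal sketch (the path is the one from the error message; adjust it to your own install):

```python
import pathlib
import shutil

# Path taken from the error message above; adjust to your own install.
leftover = pathlib.Path(r"C:\Users\user\stable-diffusion-webui\tmp\openpose-editor")

# Remove the partially cloned extension directory so that
# "Install from URL" can run git clone again.
shutil.rmtree(leftover, ignore_errors=True)  # no-op if it is already gone
```

After the folder is gone, reinstall the extension from the Extensions tab (Install from URL).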
-
Controlnet seed?
The seed is only going to impact the "random" stuff. You would need to change the ControlNet input. OpenPose editor can load an image and generate a pose for ControlNet, but you will need to edit it to reflect the change of perspective. There are some 3D options among the extensions as well, but you would probably need to start from scratch with those. https://github.com/fkunn1326/openpose-editor
-
Auto1111 Openpose editor not working
I've installed the auto1111 openpose editor from https://github.com/fkunn1326/openpose-editor.git and the control net gui from https://github.com/Mikubill/sd-webui-controlnet.git. However, upon launch the error message above is given and the openpose editor isn't there. How do I fix this issue?
-
Are there free cloud based INVOKE AI models?
!git clone https://github.com/fkunn1326/openpose-editor /workspace/stable-diffusion-webui/extensions/openpose-editor
-
Making her "dance"?
Afterwards I put it into img2img and played around with the OpenPose editor. First I recreated the original pose, which went surprisingly well. Then I just had to create the poses for the hip swaying. I played around a bit with denoising strength as well; I was mostly at 0.7-0.85. I put the poses into ControlNet one after another and this was the result. It could be made a lot smoother with more images, as I only used about 10 different poses, but this was just a quick study for me on how well you can move a character with ControlNet. With the right prompts it should definitely be possible to smoothly spin the character around, so I might give that a shot later. But for now I'm once again off making LoRAs and the occasional "normal" post. See ya
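For anyone who'd rather script this pose-by-pose loop than click through the UI, here is a minimal sketch against AUTOMATIC1111's `/sdapi/v1/img2img` API. The ControlNet model name and the base64 placeholders are assumptions; substitute your own encoded images.

```python
def build_img2img_payload(init_png_b64: str, pose_png_b64: str,
                          denoise: float = 0.8) -> dict:
    """One img2img request per dance pose: pre-made pose PNG, no preprocessor."""
    return {
        "init_images": [init_png_b64],
        "denoising_strength": denoise,  # the range used above was 0.7-0.85
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "image": pose_png_b64,
                    "module": "none",  # pose PNG is already a skeleton, skip preprocessing
                    "model": "control_v11p_sd15_openpose",  # assumed model name
                }]
            }
        },
    }

# One request per pose; POST each payload to http://127.0.0.1:7860/sdapi/v1/img2img
payloads = [build_img2img_payload("INIT_B64", f"POSE_{i}_B64") for i in range(10)]
```

Each response contains the generated frame, so stitching the ten results together reproduces the "dance" sequence.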
-
Openpose extension tab not visible
It works now after reinstalling openpose. I use this one: GitHub - fkunn1326/openpose-editor: Openpose Editor for AUTOMATIC1111's stable-diffusion-webui
sd-webui-regional-prompter
-
Regional Prompting doesn't seem to be working a lot of the time
So I'm using the Regional Prompter extension https://github.com/hako-mikan/sd-webui-regional-prompter
- Dalle-3 Examples
- Stable Diffusion 1.5 Newbie Question about creating an image with 2 characters
-
"In summary, Stable Diffusion doesn’t really care about commas. But you can use them to organize your prompts for your own orderliness." (Link to quote below.) So... Is there a way to make SD care? To make it "understand" which words we put together to create meaning?
But using Automatic1111, this extension can define a region of the image where the prompt should apply: https://github.com/hako-mikan/sd-webui-regional-prompter
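As a sketch of how region-scoped prompts commonly look with this extension (the Matrix mode and the ratio value here are assumptions; check the extension's README for your setup):

```
# Regional Prompter, Matrix mode, vertical split with Divide Ratio "1,1":
# each BREAK-separated chunk applies only to its own column.
blue-haired girl in a red dress BREAK
blonde boy in a green jacket
```

The words in each chunk stay "together" because the extension applies them only inside their region instead of letting them bleed across the whole image.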
- Train SD for CAPTION WRITING? I'm tired of uploading hairstyle pics and got "male public hair"
- How to fix the issue of generating two guys when the aspect ratio isn't square?
-
A little bit of party after fighting each other in Smash bros (Text2img, controlnet, regional prompter, adetailer)
Second, install Regional Prompter and ADetailer in the AUTOMATIC1111 webui. Next, go to Settings > ADetailer and change "sort bounding boxes" from "none" to "left and right". This means ADetailer will inpaint our subjects starting from the very left and moving right, allowing for greater control of what we want.
- What are some must-have/fun extensions or modules?
-
How to control a scene?
You can use ControlNets to control composition in various ways. You can use extensions like multidiffusion upscaler and regional prompter to control the layout of a scene. You can also inpaint details into a scene with the arrangement you want.
- Is there a way to guarantee one model in the image?
What are some alternatives?
sd-webui-controlnet - WebUI extension for ControlNet
sd-webui-latent-couple - Latent Couple extension (two shot diffusion port)
open-pose-editor - online 3d openpose editor for stable diffusion and controlnet
stable-diffusion-webui-composable-lora - This extension replaces the built-in LoRA forward procedure.
sd-webui-3d-open-pose-editor - 3d openpose editor for stable diffusion and controlnet
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
ControlNet - Let us control diffusion models!
sd-webui-depth-lib - Depth map library for use with the Control Net extension for Automatic1111/stable-diffusion-webui
mixture-of-diffusers - Mixture of Diffusers for scene composition and high resolution image generation