| | controlnet-colab | sd-webui-controlnet |
|---|---|---|
| Mentions | 9 | 247 |
| Stars | 514 | 15,979 |
| Growth | - | - |
| Activity | 3.3 | 9.6 |
| Latest commit | 11 months ago | about 21 hours ago |
| Language | Jupyter Notebook | Python |
| License | The Unlicense | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
controlnet-colab
-
ChatGPT's API Is So Good and Cheap, It Makes Most Text Generating AI Obsolete
Here are a few bonus OpenAI charcuterie images: https://twitter.com/minimaxir/status/1633635144249774082
1. I used a ControlNet Colab based on SD 1.5 and the original ControlNet app from here: https://github.com/camenduru/controlnet-colab
2. Screenshotted a B/W OpenAI logo from their website.
3. Used the Canny adapter and the prompt: "charcuterie board, professional food photography, 8k hdr, delicious and vibrant"
Now that ControlNet is in diffusers, my next project will be creating an end-to-end workflow for these types of images.
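As a rough illustration of that end-to-end workflow, here is a minimal diffusers sketch of the Canny-guided generation described above. It assumes the stock SD 1.5 checkpoint and the lllyasviel/sd-controlnet-canny model; the input filename is a placeholder.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Canny ControlNet on top of plain SD 1.5 (assumed checkpoints).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Turn the B/W logo screenshot into a Canny edge map (placeholder filename).
logo = np.array(Image.open("openai_logo.png").convert("RGB"))
edges = cv2.Canny(logo, 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe(
    "charcuterie board, professional food photography, 8k hdr, delicious and vibrant",
    image=edge_image,
    num_inference_steps=30,
).images[0]
result.save("charcuterie_logo.png")
```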
-
Groove Street, ControlNet on Google Colab, details in the images
I got this link from this GitHub page: https://github.com/camenduru/controlnet-colab
- Is ControlNet available on Google Colab?
- When I say MINDBLOWING, I mean it!! New experiments. 100% SD generated. A1111.
- Is there any Colab notebook to try out ControlNet?
-
ControlNet Colab fp16 models with Automatic 1111 GUI
Camenduru made a repository on GitHub with all his Colabs adapted for ControlNet; check it here.
- ControlNet Colab now has more than 65 models, please try it!
-
Google Colab notebook for controlling Stable Diffusion with an input image using various ControlNet models. This example used the Scribble ControlNet model with the image on the left plus the text prompt "cute puppy" to generate the image on the right. See comment for links.
This unofficial GitHub repo added a link to this Colab notebook for transferring control to other Stable Diffusion models. I haven't tried it.
sd-webui-controlnet
-
OpenPose ControlNet: A Beginner's Guide
A crucial step in setting up ControlNet for Stable Diffusion is installing the ControlNet extension, whether you run the webui on Google Colab, a Windows PC, or a Mac. Keeping the extension updated is also necessary for stable, reliable results with the OpenPose model. To install the v1.1 ControlNet extension, go to the "Extensions" tab and install it from this URL: https://github.com/Mikubill/sd-webui-controlnet. If you already have the v1 extension installed, delete its folder from stable-diffusion-webui/extensions/ first, then install v1.1.
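For reference, the manual equivalent of the "Install from URL" step is a git clone into the extensions folder. This is a minimal sketch, assuming the standard webui directory layout; adjust the path to your actual install.

```python
import shutil
import subprocess
from pathlib import Path

# Assumed standard layout; point this at your actual webui checkout.
WEBUI_ROOT = Path("stable-diffusion-webui")
ext_dir = WEBUI_ROOT / "extensions" / "sd-webui-controlnet"

# Remove a stale v1 install first, as the guide recommends.
if ext_dir.exists():
    shutil.rmtree(ext_dir)

# Clone the v1.1 extension, equivalent to "Install from URL" in the Extensions tab.
subprocess.run(
    ["git", "clone", "https://github.com/Mikubill/sd-webui-controlnet.git", str(ext_dir)],
    check=True,
)
```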
-
StyleAligned node for ComfyUI
1.1.420 Image-wise ControlNet and StyleAlign
-
PatchFusion is really impressive: high-resolution depth maps in 16-bit. I've been waiting for this. https://github.com/zhyever/PatchFusion
I opened a feature-request thread on the ControlNet GitHub where you can add your support: https://github.com/Mikubill/sd-webui-controlnet/issues/2319
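Until the extension supports it natively, one way to use such a 16-bit depth map is to normalize it to 8-bit and feed it to the depth ControlNet via diffusers. A minimal sketch, assuming the lllyasviel/sd-controlnet-depth checkpoint; the input filename and prompt are placeholders, and depending on the estimator's near/far convention you may need to invert the map.

```python
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a 16-bit depth map (e.g. a PatchFusion export; placeholder filename).
depth16 = np.array(Image.open("patchfusion_depth.png")).astype(np.float32)
lo, hi = float(depth16.min()), float(depth16.max())
depth8 = (255 * (depth16 - lo) / max(hi - lo, 1e-6)).astype(np.uint8)
depth_image = Image.fromarray(np.stack([depth8] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a cozy reading nook, soft window light",  # placeholder prompt
    image=depth_image,
    num_inference_steps=30,
).images[0]
image.save("depth_guided.png")
```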
- Going to lose my mind at this point with this problem
- Samples of style-aligned
-
Is it possible to outpaint with SD or SDXL as easily as with Photoshop? (no prompts)
It has been possible for 7 months now
- Reference Only Broken (Can someone with a working Reference Only CN upload their extension folder)
-
Web app prototype to create controlnet segmentation maps for Stable Diffusion
I sometimes use a very similar technique in Cinema 4D (here is a link to a C4D file with preset materials referencing the proper colors for semantic segmentation, in case any other C4D user wants to try it), but yours is a much more accessible solution, as it's free and available online.
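The underlying idea is simple: a segmentation map is just an image of flat colour regions, where each colour is the palette entry for a semantic class. A minimal sketch with PIL; the palette values below are illustrative stand-ins, so look up the actual ADE20K colours that the seg ControlNet expects before using them.

```python
from PIL import Image, ImageDraw

# Hypothetical palette entries -- verify against the real ADE20K colour coding.
PALETTE = {
    "sky": (6, 230, 230),
    "tree": (4, 200, 3),
    "building": (180, 120, 120),
}

# Paint flat colour regions; the class of each pixel is encoded by its colour.
seg = Image.new("RGB", (512, 512), PALETTE["sky"])
draw = ImageDraw.Draw(seg)
draw.rectangle([0, 320, 512, 512], fill=PALETTE["building"])
draw.ellipse([60, 180, 220, 360], fill=PALETTE["tree"])
seg.save("seg_map.png")  # feed this to the seg ControlNet as the control image
```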
-
Dalle-3 Examples
There are models available that give you more control - in some senses, at least.
For example, you can use Stable Diffusion with 'ControlNet' [1], where you can input an 'openpose' skeleton to choose the pose of people in the scene.
There's also a 'Regional Prompter' [2] which lets you use different prompts for different areas of the image, giving you some control over the composition.
You can also use 'inpainting' to regenerate select parts of your image if, for example, you don't like the shape of the clouds (see the sketch below).
Of course this stuff isn't perfect - for example, you'll get hands with the wrong number of fingers sometimes, no matter what you specify :)
[1] https://github.com/Mikubill/sd-webui-controlnet
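For the inpainting case, here is a minimal diffusers sketch, assuming the runwayml/stable-diffusion-inpainting checkpoint; the filenames and prompt are placeholders, and the mask should be white wherever the image is to be regenerated.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumed inpainting checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Placeholder files: the mask is white over the clouds to be redone.
image = Image.open("scene.png").convert("RGB").resize((512, 512))
mask = Image.open("cloud_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="dramatic cumulus clouds at sunset",  # placeholder prompt
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("scene_inpainted.png")
```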
- ControlNet SDXL for Automatic1111-WebUI official release: sd-webui-controlnet 1.1.400
What are some alternatives?
bloop - bloop is a fast code search engine written in Rust.
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
gif2gif - Automatic1111 Animated Image (input/output) Extension
openpose-editor - Openpose Editor for AUTOMATIC1111's stable-diffusion-webui
ControlNet - Let us control diffusion models!
T2I-Adapter - T2I-Adapter
unprompted - Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
stable-diffusion-webui-colab - stable diffusion webui colab
llama.cpp - LLM inference in C/C++
stable-diffusion-webui - Stable Diffusion web UI