DemoFusion vs ComfyUI_experiments

| | DemoFusion | ComfyUI_experiments |
|---|---|---|
| Mentions | 7 | 6 |
| Stars | 1,876 | 125 |
| Growth | 1.9% | - |
| Activity | 8.6 | 6.0 |
| Last commit | 28 days ago | 8 months ago |
| Language | Jupyter Notebook | Python |
| License | - | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DemoFusion
- A list of Stable Diffusion research software that I don't think has gotten widespread adoption.
- DemoFusion: Democratising High-Resolution Image Generation With No 💰
- DemoFusion - a new upscaling technique
- 💰DemoFusion: High-resolution generation using only an SDXL model and an RTX 3090 GPU!
  For more comparison examples, please refer to our project page: https://ruoyidu.github.io/demofusion/demofusion.html
- [CODE RELEASE!] DemoFusion: Democratising High-Resolution Image Generation With No 💰
ComfyUI_experiments
- A list of Stable Diffusion research software that I don't think has gotten widespread adoption.
  ReferenceOnly: this has been in the auto webui ControlNet extension practically forever, and is available in Comfy via a custom node published by comfy himself: https://github.com/comfyanonymous/ComfyUI_experiments/blob/master/reference_only.py (it's neat, but its practical value compared to more modern techniques like IPAdapter is questionable; it's cool that it works without any extra model, though).
- Decent images in just 3 sampling steps
  After installing the experiments from https://github.com/comfyanonymous/ComfyUI_experiments you can add the ModelSamplerToneMapNoiseTest node. It prevents CFG burn and lets you use higher CFG values. I found that a multiplier of 0.4 works decently with a CFG of 6, and a multiplier of 0.12 works decently with a CFG of 12.
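To illustrate the idea behind that node: at high CFG scales, the guidance delta between the conditional and unconditional predictions can grow large enough to "burn" the image, and a Reinhard-style tonemap compresses its magnitudes toward a soft ceiling instead. The sketch below is a simplified NumPy approximation of that idea, not the node's actual code; the per-element normalization and the `top` heuristic are assumptions.

```python
import numpy as np

def tonemap_cfg(cond, uncond, cfg_scale=12.0, multiplier=0.12):
    """Reinhard-style tonemapping of the CFG guidance term (rough sketch).

    Instead of letting the guidance delta grow without bound at high CFG,
    compress its magnitudes with the Reinhard curve x / (1 + x) so they
    saturate below a soft ceiling derived from their own statistics.
    """
    delta = cond - uncond                       # guidance direction
    mag = np.abs(delta) + 1e-10                 # per-element magnitude
    unit = delta / mag                          # keep only the sign/direction
    # soft ceiling from magnitude statistics, scaled by the node's multiplier
    top = (mag.mean() + 3.0 * mag.std()) * multiplier
    # Reinhard compression: monotone, bounded above by `top`
    compressed = (mag / top) / (mag / top + 1.0) * top
    return uncond + unit * compressed * cfg_scale

# toy usage: compressed guidance stays far smaller than plain CFG guidance
rng = np.random.default_rng(0)
cond = rng.normal(size=(1, 4, 8, 8))
uncond = rng.normal(size=(1, 4, 8, 8))
out = tonemap_cfg(cond, uncond, cfg_scale=12.0, multiplier=0.12)
```

A lower multiplier means a lower ceiling, which matches the reported settings: the higher the CFG (12 vs. 6), the smaller the multiplier (0.12 vs. 0.4) needed to keep the result from burning.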
- New ComfyUI user, how can I do this? (silly question)
  Reference-Only ControlNet doesn't do face-only (so the clothes, pose, environment, etc. have to be prompted into submission), and it often overpowers the prompt and is less consistent with faces, but it can work for consistent characters. There's a ref-only node in "ComfyUI experiments" which you can install through the manager.
- CFG rescale?
- Dynamic Thresholding for ComfyUI?
  Do any Dynamic Thresholding custom nodes exist for ComfyUI? I believe it might be https://github.com/comfyanonymous/ComfyUI_experiments but I can't figure it out.
- SD's noise schedule is flawed! This new paper investigates it.
  I have gone and implemented "Rescale Classifier-Free Guidance" as a ComfyUI custom node: https://github.com/comfyanonymous/ComfyUI_experiments
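For context, Rescale Classifier-Free Guidance (from the noise-schedule paper mentioned above) pulls the statistics of the CFG result back toward those of the conditional prediction, which tames over-saturation at high guidance scales. Below is a minimal NumPy sketch of the formula; it uses per-tensor standard deviations for simplicity (the paper computes them per sample and channel), and the function name and `rescale` default are assumptions, not the custom node's API.

```python
import numpy as np

def rescale_cfg(cond, uncond, guidance_scale=7.5, rescale=0.7):
    """Rescaled classifier-free guidance (sketch).

    Plain CFG can push the prediction's standard deviation far above
    that of the conditional prediction at high guidance scales.
    Rescaling the CFG result to match cond's std, then interpolating
    with `rescale` (0.0 = plain CFG, 1.0 = fully rescaled), fixes that.
    """
    cfg = uncond + guidance_scale * (cond - uncond)   # plain CFG
    rescaled = cfg * (cond.std() / cfg.std())         # match cond's std
    return rescale * rescaled + (1.0 - rescale) * cfg

# toy usage: with rescale=1.0 the output's std matches cond's exactly
rng = np.random.default_rng(0)
cond = rng.normal(size=(4, 64))
uncond = rng.normal(size=(4, 64))
out = rescale_cfg(cond, uncond, guidance_scale=12.0, rescale=1.0)
print(np.isclose(out.std(), cond.std()))  # True
```

With `rescale=0.0` the function reduces to ordinary CFG, so the parameter trades off between the two behaviors.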
What are some alternatives?
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
ComfyUI_IPAdapter_plus
MotionDirector - MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
Comfy-Custom-Node-How-To - An unofficial practical guide to getting started developing custom nodes for ComfyUI
sliders - Concept Sliders for Precise Control of Diffusion Models
ziplora-pytorch - Implementation of "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs"
sd_lite - set-up Stable Diffusion with minimal dependencies and a single multi-function pipe
stable-diffusion-reference-only - img2img version of stable diffusion. Anime Character Remix. Line Art Automatic Coloring. Style Transfer.
sd-dynamic-thresholding - Dynamic Thresholding (CFG Scale Fix) for Stable Diffusion (StableSwarmUI, ComfyUI, and Auto WebUI)
Specialist-Diffusion - [CVPR 2023] Specialist Diffusion: Extremely Low-Shot Fine-Tuning of Large Diffusion Models
sd_webui_SAG