stable-diffusion-reference-only vs ComfyUI_experiments

| | stable-diffusion-reference-only | ComfyUI_experiments |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 114 | 125 |
| Growth | - | - |
| Activity | 9.1 | 6.0 |
| Latest Commit | about 2 months ago | 8 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-reference-only
- List of Stable Diffusion research software that I don't think has gotten widespread adoption.
-
Stable Diffusion Reference Only: Image Prompt and Blueprint Jointly Guided Multi-Condition Diffusion Model for Secondary Painting
Code: https://github.com/aihao2000/stable-diffusion-reference-only
ComfyUI_experiments
-
List of Stable Diffusion research software that I don't think has gotten widespread adoption.
ReferenceOnly: this has been in the Auto WebUI ControlNet extension for a very long time, and is available in ComfyUI via a custom node published by comfy himself: https://github.com/comfyanonymous/ComfyUI_experiments/blob/master/reference_only.py (it's neat, but its practical value compared to more modern techniques like IPAdapter is questionable; it's cool that it works without any extra model, though)
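The core idea behind reference-only guidance is that the generated image's self-attention also attends over the reference image's tokens, so no extra model is needed. A minimal NumPy sketch of that idea (an illustration of the concept, not the node's actual code; the function name and weight layout are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reference_only_self_attention(x, ref, w_q, w_k, w_v):
    """Self-attention where the target image's queries attend over the
    concatenation of target and reference tokens, letting features from
    the reference leak into the generation without any extra model."""
    q = x @ w_q                                 # queries: target tokens only
    kv_src = np.concatenate([x, ref], axis=0)   # keys/values: target + reference
    k = kv_src @ w_k
    v = kv_src @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])     # scaled dot-product attention
    return softmax(scores) @ v                  # shape: (n_target, d)
```

In the real node this substitution is patched into the UNet's self-attention blocks; here it is a single toy layer.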
-
decent images in just 3 sampling steps
After installing the experiments from https://github.com/comfyanonymous/ComfyUI_experiments you can add the ModelSamplerToneMapNoiseTest node. This node prevents CFG burn and allows you to use higher CFG values. I found that a multiplier of 0.4 works decently with a CFG of 6, and a multiplier of 0.12 works decently with a CFG of 12.
-
New comfyui user, how can I do this? (silly question)
Reference-Only ControlNet - it doesn't do face-only (so the clothes, pose, environment, etc. have to be prompted into submission), it often overpowers the prompt, and it's less consistent with faces - but it can work for consistent characters. There's a reference-only node in "ComfyUI experiments", which you can install through the Manager.
- CFG rescale?
-
Dynamic Thresholding for comfyui?
Do any Dynamic Thresholding custom nodes exist for ComfyUI? I believe it might be https://github.com/comfyanonymous/ComfyUI_experiments, but I can't figure it out.
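For context, dynamic thresholding (introduced in Google's Imagen paper; sd-dynamic-thresholding in the alternatives list adapts it to Stable Diffusion) clamps the predicted sample to a per-image percentile instead of a fixed range. A minimal sketch of the paper's version (the function name is my own; real SD implementations add extra scaling knobs):

```python
import numpy as np

def dynamic_threshold(x0_pred, percentile=99.5):
    """Imagen-style dynamic thresholding: pick s as a high percentile of
    |x0_pred|, clip to [-s, s], then divide by s. This reins in the
    over-saturated values that high CFG scales produce."""
    s = max(np.percentile(np.abs(x0_pred), percentile), 1.0)  # threshold, at least 1
    return np.clip(x0_pred, -s, s) / s
```

When the prediction already lies in [-1, 1], s stays at 1 and the sample passes through unchanged.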
-
SD's noise schedule is flawed! This new paper investigates it.
I have gone and implemented the "Rescale Classifier-Free Guidance" as a ComfyUI custom node: https://github.com/comfyanonymous/ComfyUI_experiments
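The rescale trick from that paper ("Common Diffusion Noise Schedules and Sample Steps Are Flawed", Lin et al.) computes ordinary CFG, then pulls the result's standard deviation back toward the conditional prediction's and blends with a factor phi. A sketch of that formula (per-tensor statistics for simplicity; implementations may compute them per channel):

```python
import numpy as np

def rescale_cfg(uncond, cond, guidance_scale, rescale=0.7):
    """Rescaled classifier-free guidance: standard CFG, then the output's
    std is matched to the conditional prediction's std, blended by
    `rescale` (phi in the paper) to avoid over-exposure at high scales."""
    cfg = uncond + guidance_scale * (cond - uncond)   # ordinary CFG
    rescaled = cfg * (cond.std() / cfg.std())         # match cond's std
    return rescale * rescaled + (1.0 - rescale) * cfg
```

With `rescale=0` this reduces to plain CFG; with `rescale=1` the output's standard deviation exactly matches the conditional prediction's.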
What are some alternatives?
deep-learning-v2-pytorch - Projects and exercises for the latest Deep Learning ND program https://www.udacity.com/course/deep-learning-nanodegree--nd101
ComfyUI_IPAdapter_plus
sliders - Concept Sliders for Precise Control of Diffusion Models
Comfy-Custom-Node-How-To - An unofficial practical guide to getting started developing custom nodes for ComfyUI
ziplora-pytorch - Implementation of "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs"
DemoFusion - Let us democratise high-resolution generation! (CVPR 2024)
RIVAL - [NeurIPS 2023 Spotlight] Real-World Image Variation by Aligning Diffusion Inversion Chain
DCT-Net - Official implementation of "DCT-Net: Domain-Calibrated Translation for Portrait Stylization", SIGGRAPH 2022 (TOG); Multi-style cartoonization
sd-dynamic-thresholding - Dynamic Thresholding (CFG Scale Fix) for Stable Diffusion (StableSwarmUI, ComfyUI, and Auto WebUI)
animegan2-pytorch - PyTorch implementation of AnimeGANv2
sd_webui_SAG