DemoFusion vs sliders

| | DemoFusion | sliders |
|---|---|---|
| Mentions | 7 | 3 |
| Stars | 1,876 | 735 |
| Growth | 1.9% | - |
| Activity | 8.6 | 8.3 |
| Last commit | 28 days ago | 27 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | - | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DemoFusion
- List of Stable Diffusion research software that I don't think has gotten widespread adoption.
- DemoFusion: Democratising High-Resolution Image Generation With No 💰
- DemoFusion - a new upscaling technique
- 💰DemoFusion: High-resolution generation using only an SDXL model and an RTX 3090 GPU! For more comparison examples, please refer to our project page: https://ruoyidu.github.io/demofusion/demofusion.html
- [CODE RELEASE!] DemoFusion: Democratising High-Resolution Image Generation With No 💰
sliders
- Are we at peak vector database?
> Always felt they're more like hashes/fingerprints for the RAG use cases.
Yes, I see where you’re coming from. Perceptual hashes[0] are pretty similar; the key property is that similar documents should have similar embeddings (unlike cryptographic hashes, where a single bit flip should produce a completely different hash).
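That contrast can be sketched with a toy example. The letter-histogram "fingerprint" below is hypothetical, purely for illustration (real perceptual hashes such as pHash work on image frequency content), but it shows the avalanche behavior of a cryptographic hash versus a locality-preserving fingerprint:

```python
import hashlib

a = b"the quick brown fox"
b = b"the quick brown fix"  # one character changed

# Cryptographic hash: a one-character change flips roughly half the
# output bits, so the hex digests agree almost nowhere.
ha = hashlib.sha256(a).hexdigest()
hb = hashlib.sha256(b).hexdigest()
matching = sum(x == y for x, y in zip(ha, hb))  # ~4 of 64 by chance

# Toy "perceptual" fingerprint (hypothetical): a letter histogram.
# Similar inputs yield nearby fingerprints, so the distance stays small.
def fingerprint(data: bytes) -> list[int]:
    counts = [0] * 26
    for ch in data.lower():
        if 97 <= ch <= 122:  # bytes iterate as ints; 97-122 is a-z
            counts[ch - 97] += 1
    return counts

fa, fb = fingerprint(a), fingerprint(b)
distance = sum(abs(x - y) for x, y in zip(fa, fb))  # 2: one 'o' swapped for 'i'
```

A RAG system wants exactly the fingerprint-like behavior: nearby inputs should land near each other so similarity search works.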
Nice embeddings encode information spatially; a classic example of embedding arithmetic is: king - man + woman = queen[1]. “Concept Sliders” is a cool application of this idea to image generation [2].
Personally I’ve not had _too_ much trouble with running out of RAM due to embeddings themselves, but I did spend a fair amount of time last week profiling memory usage to make sure I didn’t run out in prod, so it is on my mind!
[0] https://en.m.wikipedia.org/wiki/Perceptual_hashing
[1] https://www.technologyreview.com/2015/09/17/166211/king-man-...
[2] https://github.com/rohitgandikota/sliders
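A minimal sketch of that king - man + woman arithmetic, using made-up 4-dimensional vectors (real embeddings like word2vec's have hundreds of dimensions; these toy values are chosen only so the arithmetic lands on "queen"):

```python
# Toy "embeddings" -- entirely hypothetical values for illustration.
emb = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "man":   [0.1, 0.9, 0.1, 0.1],
    "woman": [0.1, 0.1, 0.9, 0.1],
    "queen": [0.9, 0.0, 0.9, 0.2],
    "apple": [0.0, 0.2, 0.1, 0.9],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def nearest(query, vocab, exclude=()):
    # In practice the input words are excluded, since the query vector
    # often remains closest to one of them.
    candidates = {w: v for w, v in vocab.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(query, candidates[w]))

query = add(sub(emb["king"], emb["man"]), emb["woman"])
result = nearest(query, emb, exclude={"king", "man", "woman"})  # -> "queen"
```

The same spatial structure is what Concept Sliders exploits: moving along a learned direction in a diffusion model's parameter or latent space smoothly varies one attribute of the output.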
- LoRA Adaptors for Precise Control in Diffusion Models
- List of Stable Diffusion research software that I don't think has gotten widespread adoption.
What are some alternatives?
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
stable-diffusion-reference-only - img2img version of stable diffusion. Anime Character Remix. Line Art Automatic Coloring. Style Transfer.
ComfyUI_experiments - Some experimental custom nodes.
ziplora-pytorch - Implementation of "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs"
MotionDirector - MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
SEED - Official implementation of SEED-LLaMA (ICLR 2024).
sd_lite - set-up Stable Diffusion with minimal dependencies and a single multi-function pipe
RIVAL - [NeurIPS 2023 Spotlight] Real-World Image Variation by Aligning Diffusion Inversion Chain
LAMP - Official implementation of LAMP: Learn a Motion Pattern by Few-Shot Tuning a Text-to-Image Diffusion Model (few-shot text-to-video diffusion)
Specialist-Diffusion - [CVPR 2023] Specialist Diffusion: Extremely Low-Shot Fine-Tuning of Large Diffusion Models