git-re-basin vs Merge-Stable-Diffusion-models-without-distortion

| | git-re-basin | Merge-Stable-Diffusion-models-without-distortion |
|---|---|---|
| Mentions | 9 | 6 |
| Stars | 438 | 135 |
| Growth | - | - |
| Activity | 3.5 | 7.1 |
| Latest commit | about 1 year ago | 4 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning git-re-basin:
Merge-Stable-Diffusion-models-without-distortion-gui
Implementation: https://github.com/samuela/git-re-basin
- I'm testing if the 1.5 and 2.0 models combine in Automatic1111 now...
I love SD but the pain is real
Wouldn't "applying the permutation" simply swap all the parameters in a model so they match on both models? For example, in https://github.com/samuela/git-re-basin/blob/main/src/cifar10_vgg_weight_matching.py, on line 184 they apply the permutation, and on line 192 they lerp from model A's params to the permuted model B's params. This lerp is basically a weighted sum merge, isn't it? At a lerp of 0.5, it would be somewhere in between model A and the permuted model B.
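The point in the comment above can be shown with a toy NumPy sketch (hypothetical 4×3 weight matrices, not the repo's actual JAX code): after permuting model B's hidden units, the lerp is just an ordinary weighted sum of the two tensors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "models": a single weight matrix each (stand-ins for real parameters)
w_a = rng.normal(size=(4, 3))
w_b = rng.normal(size=(4, 3))

# A permutation of model B's hidden units (rows of this layer's weights),
# standing in for the permutation that weight matching would find
perm = rng.permutation(4)
w_b_permuted = w_b[perm]

# The lerp from A to permuted B; at t = 0.5 this is exactly a
# 50/50 weighted-sum merge of A and permuted B
t = 0.5
merged = (1 - t) * w_a + t * w_b_permuted

assert np.allclose(merged, 0.5 * (w_a + w_b_permuted))
```

Note that in a full network, permuting one layer's output units also requires permuting the next layer's input dimension accordingly, so the network's function is unchanged; the sketch above only shows the single-layer case.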
Not really working, poorly coded sparse tensor compression of Dreambooth models. Help appreciated, code in comments
Definitely interesting, but you might get something useful out of https://github.com/samuela/git-re-basin ?
- Git Re-Basin: Merging models and preserving latent spaces (i.e. not the A1111 linear interpolation)
Most Popular AI Research Sept 2022 - Ranked Based On Total GitHub Stars
Git Re-Basin: Merging Models modulo Permutation Symmetries (code: https://github.com/samuela/git-re-basin, paper: https://arxiv.org/abs/2209.04836v1)
- [D] Most Popular AI Research Sept 2022 - Ranked Based On GitHub Stars
- Git Re-Basin: Merging Models Modulo Permutation Symmetries
Posts mentioning Merge-Stable-Diffusion-models-without-distortion:
Merging multiple models
https://github.com/ogkalu2/Merge-Stable-Diffusion-models-without-distortion seems to be the solution. But I'm just too tired to try it now. A link, a bookmark, a possible solution: that's already something ;) For later.
- Merge-Stable-Diffusion-models-without-distortion-gui
You can now merge in-painting and regular models using the Automatic WebUI
Have you tried out https://github.com/ogkalu2/Merge-Stable-Diffusion-models-without-distortion yet?
- XY Plot Comparisons of SD v1.5 ema VS SD 2.0 x768 ema models
- I love SD but the pain is real
What are some alternatives?
VToonify - [SIGGRAPH Asia 2022] VToonify: Controllable High-Resolution Portrait Video Style Transfer
Merge-Stable-Diffusion-models-without-distortion-gui - GUI for Merge-Stable-Diffusion-models-without-distortion
artbot-for-stable-diffusion - A front-end GUI for interacting with the AI Horde / Stable Diffusion distributed cluster
stable-diffusion-webui - Stable Diffusion web UI
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
depthmap2mask - Create masks out of depthmaps in img2img
setfit - Efficient few-shot learning with Sentence Transformers
civitai - A repository of models, textual inversions, and more
Text2Light - [SIGGRAPH Asia 2022] Text2Light: Zero-Shot Text-Driven HDR Panorama Generation
stable-diffusion-webui-colab - stable diffusion webui colab
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.