| | git-re-basin | iris |
|---|---|---|
| Mentions | 9 | 8 |
| Stars | 438 | 756 |
| Growth | - | - |
| Activity | 3.5 | 1.9 |
| Latest Commit | about 1 year ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
git-re-basin
- Merge-Stable-Diffusion-models-without-distortion-gui
Implementation: https://github.com/samuela/git-re-basin
- I'm testing if the 1.5 and 2.0 models combine in Automatic 1111 now...
- I love SD but the pain is real
Wouldn't "applying the permutation" simply swap all the parameters in a model so they match on both models? For example, in https://github.com/samuela/git-re-basin/blob/main/src/cifar10_vgg_weight_matching.py, on line 184 they apply the permutation, and on line 192 they lerp from model A's params to the permuted model B's params. This lerp is basically a weighted sum merge, isn't it? At a lerp of 0.5, it would be somewhere in between model A and the permuted model B.
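A minimal numpy sketch of what this comment describes, using toy two-layer MLPs (all sizes hypothetical). Note that the permutation below is an arbitrary example; Git Re-Basin's weight matching actually solves an assignment problem to *find* the permutation that best aligns the two models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLPs standing in for "model A" and "model B"
# (input dim 3, hidden width 4, output dim 2 -- all hypothetical sizes).
def make_params():
    return {
        "W1": rng.normal(size=(4, 3)),
        "b1": rng.normal(size=(4,)),
        "W2": rng.normal(size=(2, 4)),
    }

def forward(p, x):
    # W2 @ relu(W1 @ x + b1)
    return p["W2"] @ np.maximum(p["W1"] @ x + p["b1"], 0.0)

A, B = make_params(), make_params()

# "Applying the permutation": reorder B's hidden units. Permuting the rows
# of W1/b1 together with the matching columns of W2 leaves B's function
# exactly unchanged -- only the internal unit ordering differs.
perm = np.array([2, 0, 3, 1])
B_perm = {
    "W1": B["W1"][perm],
    "b1": B["b1"][perm],
    "W2": B["W2"][:, perm],
}

# The lerp is then an elementwise weighted sum, as the comment says:
# at t = 0.5 the merged model sits halfway between A and the permuted B.
t = 0.5
merged = {k: (1 - t) * A[k] + t * B_perm[k] for k in A}
```

So yes: the permutation step only re-indexes B's hidden units (preserving its function), and the subsequent lerp is an ordinary weighted-sum merge between A and the re-indexed B.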
- Not really working, poorly coded sparse tensor compression of Dreambooth models. Help appreciated, code in comments
Definitely interesting, but you might get something useful out of https://github.com/samuela/git-re-basin ?
- Git Re-Basin: Merging models and preserving latent spaces (ie not the A111 linear interpolation)
- Most Popular AI Research Sept 2022 - Ranked Based On Total GitHub Stars
Git Re-Basin: Merging Models modulo Permutation Symmetries https://github.com/samuela/git-re-basin https://arxiv.org/abs/2209.04836v1
- [D] Most Popular AI Research Sept 2022 - Ranked Based On GitHub Stars
- Git Re-Basin: Merging Models Modulo Permutation Symmetries
iris
- From Deep to Long Learning
Yeah, after all, these LLMs predict one sequence of tokens from another sequence of tokens, and the tokens could be anything. It just "happens" that text carries the most knowledge and is the easiest to input; then there are images, sound, and video, but tokens could also be learned from world experience in RL:
Transformers are Sample-Efficient World Models:
https://github.com/eloialonso/iris#transformers-are-sample-e...
- What is the next booming topic in Deep RL?
- Most Popular AI Research Sept 2022 - Ranked Based On Total GitHub Stars
Transformers are Sample Efficient World Models https://github.com/eloialonso/iris https://arxiv.org/abs/2209.00588v1
- [D] Most Popular AI Research Sept 2022 - Ranked Based On GitHub Stars
- Minimal PyTorch re-implementation of GPT
This is actually a pretty neat, self-contained implementation that can super easily be extended beyond stereotypical natural-language models, for example to create world models for video games [1] or robot models that learn to imitate from large, chaotic human demonstration data [2] (disclaimer: I'm an author on the second one). Basically, GPT (or minGPT) models are EXCELLENT sequence modelers, almost to the point where you can throw any sensible sequence data at them and hope to get interesting results, as long as you don't overfit.
Even though I have only been working on machine learning for around six years, it's crazy to see how fast the landscape has changed recently, with diffusion models and transformers. It's not too much to say that we might see more major breakthroughs by the end of this decade and end up in a place we can't even imagine right now!
[1] https://github.com/eloialonso/iris
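The "any sensible sequence data" point is what makes the world-model use case work: discrete observations and actions can be flattened into one token stream and trained with an ordinary next-token objective. A toy sketch of that data format, with made-up token counts and a simple bigram counter standing in for minGPT purely to keep the example self-contained:

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(0)

# Hypothetical environment: 4 discrete observation tokens, 2 action tokens.
N_OBS, N_ACT = 4, 2
obs = rng.integers(0, N_OBS, size=500)
act = rng.integers(0, N_ACT, size=500)

# Shared vocabulary: observations use ids 0..3, actions use ids 4..5.
# Interleave them into one stream: obs, act, obs, act, ... -- exactly the
# shape of data a GPT-style model consumes, just not text.
stream = np.empty(2 * len(obs), dtype=int)
stream[0::2] = obs
stream[1::2] = act + N_OBS

# Next-token "model": bigram counts (a real run would train minGPT on
# these (prev, next) pairs with cross-entropy instead).
counts = defaultdict(Counter)
for prev, nxt in zip(stream[:-1], stream[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Most frequent next token id after `token`."""
    return counts[token].most_common(1)[0][0]
```

The model never needs to know which tokens are "observations" and which are "actions"; the alternating structure is just a property of the sequence it learns to predict.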
- Transformers are Sample Efficient World Models
- [R] Transformers are Sample Efficient World Models: With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS outperforms humans on 10 out of 26 games and surpasses MuZero.
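The "two hours of gameplay" figure is a back-of-envelope conversion of the Atari 100k sample budget, assuming the standard frame-skip of 4 and Atari's 60 FPS:

```python
steps = 100_000              # Atari 100k: 100k agent (policy) steps
frames = steps * 4           # frame-skip of 4 -> emulator frames
hours = frames / 60 / 3600   # Atari runs at 60 frames per second
print(round(hours, 2))       # 1.85 -> roughly two hours of real-time play
```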
What are some alternatives?
VToonify - [SIGGRAPH Asia 2022] VToonify: Controllable High-Resolution Portrait Video Style Transfer
setfit - Efficient few-shot learning with Sentence Transformers
artbot-for-stable-diffusion - A front-end GUI for interacting with the AI Horde / Stable Diffusion distributed cluster
Text2Light - [SIGGRAPH Asia 2022] Text2Light: Zero-Shot Text-Driven HDR Panorama Generation
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
block-recurrent-transformer-pytorch - Implementation of Block Recurrent Transformer - Pytorch
machine-learning-articles - 🧠💬 Articles I wrote about machine learning, archived from MachineCurve.com.
motion-diffusion-model - The official PyTorch implementation of the paper "Human Motion Diffusion Model"
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
CSL - [COLING 2022] CSL: A Large-scale Chinese Scientific Literature Dataset 中文科学文献数据集