Awesome-Dataset-Distillation vs Cold-Diffusion-Models
| | Awesome-Dataset-Distillation | Cold-Diffusion-Models |
|---|---|---|
| Mentions | 3 | 14 |
| Stars | 1,176 | 933 |
| Growth | - | - |
| Activity | 9.6 | 0.0 |
| Last commit | 3 days ago | over 1 year ago |
| Language | HTML | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Awesome-Dataset-Distillation
- Researchers created a novel framework, ‘FedD3’, for federated learning in resource-constrained edge environments via decentralized dataset distillation
Check out the paper and GitHub link.
- [D] Most Popular AI Research Aug 2022 - Ranked Based On GitHub Stars
- Most Popular AI Research Aug 2022 pt. 2 - Ranked Based On GitHub Stars
https://arxiv.org/abs/2208.11311 https://github.com/Guang000/Awesome-Dataset-Distillation
Cold-Diffusion-Models
- [Discussion] Training a diffusion model with a destructive process other than Gaussian noise
Sure you can. You might be interested in cold diffusion (https://arxiv.org/abs/2208.09392), which tries a range of degradation processes besides adding Gaussian noise. You can choose more or less any input corruption process and teach the model to reverse it, and it works reasonably well (though Gaussian noise may still be better).
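The recipe described here — pick a corruption, train a restoration network to undo it — can be sketched as below. This is an illustrative sketch, not the repo's actual code: `degrade`, `train_step`, and the choice of a blur corruption are all assumptions; the paper's general idea is degrading a clean image to a random severity and minimizing an L1 restoration loss.

```python
import torch
import torch.nn.functional as F

def degrade(x0, t, T):
    """Hypothetical degradation operator D(x0, t): progressive blur whose
    strength grows with timestep t. Cold diffusion allows (almost) any
    corruption here, not just adding Gaussian noise."""
    k = 2 * int(t) + 1                       # odd kernel size, larger = blurrier
    return F.avg_pool2d(x0, k, stride=1, padding=k // 2)

def train_step(model, x0, optimizer, T=50):
    """One cold-diffusion-style training step: corrupt a clean batch to a
    random severity t, then train the model to restore the clean image."""
    t = torch.randint(1, T + 1, (1,)).item()  # random severity in [1, T]
    xt = degrade(x0, t, T)                    # corrupted input D(x0, t)
    x0_hat = model(xt, t)                     # restoration network R(xt, t)
    loss = F.l1_loss(x0_hat, x0)              # L1 restoration loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Any restoration network that accepts the corrupted image and the timestep can be plugged in as `model`; the corruption only needs to be gradual enough for the network to learn its inverse.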
- [D]eterministic diffusion models
- The Uncanny Failures of A.I.-Generated Hands
I wrote a response yesterday but did not post or send it, oops.
I still don't understand the problem. If you ask a model trained on a noise pattern of "trees" for a forest, it will still give you a random forest; that's what it was trained on. Also see https://arxiv.org/abs/2208.09392 for the diffusion process applied to corruptions other than Gaussian noise.
- When will we get away from noise-based diffusion
What about this research: https://arxiv.org/abs/2208.09392
- Becoming a machine learning Engineer.
- About art AIs, how noise works?
- Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise
Found relevant code at https://github.com/arpitbansal297/Cold-Diffusion-Models
- Denoising Diffusion models from first principles in Julia
This claims to explain diffusion models from first principles, but the problem with explaining how they work is that we don't really know how they work.
The explanation in the original paper turns out not to be true; you can get rid of most of their assumptions and it still works: https://arxiv.org/abs/2208.09392
- [D] Has anyone tried coding latent diffusion from scratch? Or tried other conditioning information aside from image classes and text?
Check out the cold-diffusion repo, which has nice clean implementations, and also is useful in pointing out that the multi-step computation idea isn’t limited to denoising. https://github.com/arpitbansal297/Cold-Diffusion-Models
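The multi-step idea mentioned here can be sketched as below. This follows the improved sampling rule from the cold diffusion paper (estimate the clean image, then swap one level of degradation for a slightly milder one); the `model(x, t)` and `degrade(x, t)` interfaces are assumptions, not the repo's exact API, and `degrade(x, 0)` is assumed to be the identity.

```python
import torch

@torch.no_grad()
def cold_sample(model, degrade, xT, T):
    """Iterative cold-diffusion sampling: starting from a fully degraded
    input xT, repeatedly estimate the clean image and re-degrade it to a
    milder severity, stepping t = T, T-1, ..., 1."""
    x = xT
    for t in range(T, 0, -1):
        x0_hat = model(x, t)                              # restoration estimate
        x = x - degrade(x0_hat, t) + degrade(x0_hat, t - 1)
    return x
```

The update `x - D(x0_hat, t) + D(x0_hat, t - 1)` is what makes the scheme robust to imperfect restoration: errors in `x0_hat` largely cancel between the two degradation terms, which is why the same loop works for blur, masking, and other non-noise corruptions.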
- [D] Most Popular AI Research August 2022 - Ranked By Twitter Likes
What are some alternatives?
textual_inversion
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
Intrusion-Detection-System-Using-Machine-Learning - Code for IDS-ML: intrusion detection system development using machine learning algorithms (Decision tree, random forest, extra trees, XGBoost, stacking, k-means, Bayesian optimization..)
VideoX - VideoX: a collection of video cross-modal models
MinVIS
PeRFception - [NeurIPS2022] Official implementation of PeRFception: Perception using Radiance Fields.
ddsp-singing-vocoders - Official implementation of SawSing (ISMIR'22)
civitai - A repository of models, textual inversions, and more