Cold-Diffusion-Models vs ControlNet

| | Cold-Diffusion-Models | ControlNet |
|---|---|---|
| Mentions | 14 | 127 |
| Stars | 933 | 27,964 |
| Growth | - | - |
| Activity | 0.0 | 4.1 |
| Latest commit | over 1 year ago | 2 months ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Cold-Diffusion-Models
- [Discussion] training a diffusion model with a destructive process other than gaussian noise
Sure you can. You might be interested in cold diffusion (https://arxiv.org/abs/2208.09392), which tries a bunch of different kinds of degradation processes besides adding Gaussian noise. You can choose pretty much whatever input corruption process you want and teach the model to reverse it, and it works reasonably well (I think Gaussian noise might still be better, though).
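As a rough illustration of that idea (not the repository's actual code), one could swap the additive-noise step for any parameterized degradation, here a progressive blur chosen purely as an example, and train a restoration network to invert it:

```python
# Minimal sketch of cold-diffusion-style training with a non-Gaussian
# degradation (progressive blur). The blur schedule, loss, and the
# `restorer` network interface are illustrative assumptions, not the
# Cold-Diffusion-Models repository's actual code.
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

T = 100  # number of degradation severities

def degrade(x0, t):
    # D(x0, t): a corruption that gets harsher as t grows. In principle any
    # corruption works: blur, masking, downsampling, "snowification", ...
    sigma = 0.1 + 0.2 * float(t)
    return gaussian_blur(x0, kernel_size=[11, 11], sigma=[sigma, sigma])

def train_step(restorer, optimizer, x0):
    # Sample a random severity, corrupt the clean image, and train the
    # network R(x_t, t) to predict the clean image x0 directly.
    t = torch.randint(1, T + 1, (1,)).item()
    x_t = degrade(x0, t)
    x0_hat = restorer(x_t, torch.tensor([t], device=x0.device))
    loss = F.l1_loss(x0_hat, x0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```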
- [D]eterministic diffusion models
- The Uncanny Failures of A.I.-Generated Hands
I wrote a response yesterday but did not post or send it, oops.
I still don't understand the problem: if you ask a model trained on a noise pattern of "trees" for a forest, it will still give you a random forest; that's what it was trained on. See also https://arxiv.org/abs/2208.09392 for the diffusion process applied to corruptions other than Gaussian noise.
- when will we get away from noise based diffusion
What about this research: https://arxiv.org/abs/2208.09392
- Becoming a machine learning Engineer.
- About art AIs, how noise works?
- Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise
Found relevant code at https://github.com/arpitbansal297/Cold-Diffusion-Models + all code implementations here
- Denoising Diffusion models from first principle in Julia
This claims to explain diffusion models from first principles, but the issue with explaining how they work is that we don't know how they work.
The explanation in the original paper turns out not to be true; you can get rid of most of their assumptions and it still works: https://arxiv.org/abs/2208.09392
- [D] Has anyone tried coding latent diffusion from scratch? or tried other conditioning information aside from image classes and text?
Check out the cold-diffusion repo, which has nice clean implementations, and is also useful in pointing out that the multi-step computation idea isn't limited to denoising. https://github.com/arpitbansal297/Cold-Diffusion-Models
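To make that last point concrete, here is a rough sketch of the multi-step sampling rule the Cold Diffusion paper proposes (as I understand it), reusing the hypothetical `degrade` and `restorer` from the training sketch above; it is not the repository's verbatim code:

```python
# Sketch of cold diffusion's improved multi-step sampling: instead of
# removing noise, step the current sample from severity t to t-1 using
# the restorer's estimate of the clean image. `degrade` and `restorer`
# are the hypothetical operators from the training sketch above.
import torch

@torch.no_grad()
def sample(restorer, x_T, T):
    x_t = x_T  # a fully degraded image (e.g., maximally blurred)
    for t in range(T, 0, -1):
        t_tensor = torch.tensor([t], device=x_t.device)
        x0_hat = restorer(x_t, t_tensor)  # estimate of the clean image
        # x_{t-1} = x_t - D(x0_hat, t) + D(x0_hat, t-1):
        # works for arbitrary degradations, not just additive noise.
        x_t = x_t - degrade(x0_hat, t) + degrade(x0_hat, t - 1)
    return x_t
```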
- [D] Most Popular AI Research August 2022 - Ranked By Twitter Likes
ControlNet
- With the recent developments, it looks like AI art is finally beginning to evolve in the right direction
It's all possible. Have a look into Automatic1111's Web UI, ControlNet and OpenPose. If you don't have a dedicated GPU with at least 8GB of VRAM, or at least 16GB of RAM to run on the CPU, you can also use Stable Horde, which runs the web UI over a peer-to-peer network: you only contribute a fraction of your resources, but you get to use local AI models with all the bells and whistles that you won't get from "state-of-the-art" paid services.
- AI "Artists" Are Lazy, and the Ultimate Goal of AI Image Generation (hint: it's sloth)
Next up is ControlNet. ControlNet, as lllyasviel (the creator of ControlNet) describes it, "lets us control diffusion models!" ControlNet is a neural network structure that controls diffusion models by adding extra connections [8]. There is more to it than that, but the big takeaway is that ControlNet takes a preprocessed image that you provide (or that is generated for you) and uses it to constrain the output the sampler generates from noise, giving you a bit more control over the result. ControlNet is typically used for character or scene "artwork" that previously would have been a challenge with prompting alone (at least with the current architecture).
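To make that description concrete, here is a minimal sketch of the same idea using the diffusers library's ControlNet integration; the model IDs, Canny thresholds, file names, and prompt are assumptions chosen for illustration, not something from the quoted post:

```python
# Minimal sketch of conditioning generation on a preprocessed control image
# using diffusers' ControlNet support. Model IDs, thresholds, and the input
# image are placeholder assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1) Preprocess: turn a reference photo into an edge map (the "control" image).
reference = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(reference, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2) Load a ControlNet trained on that kind of conditioning and attach it to SD.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# 3) The edge map constrains layout/pose while the prompt drives content.
result = pipe(
    "a watercolor forest scene", image=control_image, num_inference_steps=30
).images[0]
result.save("controlled_output.png")
```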
- Making a ControlNet inpaint for sdxl
- [P5V6P2] Mother and Daughter (by azfumi)
For the first part of your comment, I can simply refer you to technologies like ControlNet, LoRA and prompt embeddings: https://github.com/lllyasviel/ControlNet https://github.com/microsoft/LoRA
- Calling yourself an AI artist is almost exactly the same as calling yourself a cook for heating readymade meals in a microwave
- Why is the AI not listening to my prompts?
Here you can see what every ControlNet preprocessor and model does, to give you an idea of how to use them.
- Can't get img2img working well
Yeah, it takes a while to really get comfortable with the wonkiness. If you are trying to do something specific, look for a LoRA, but in general I'd recommend getting ControlNet so you can feed it a reference image. Another simple trick is to edit the image a bit in GIMP or another photo editor to get the color scheme you like, then feed it back to img2img at low denoising (0.1-0.2) to refine it. You can also add garishly bad cartoon drawings or photoshop in assets, and img2img will usually make something of them and blend them into your image; I find this easier than using img2img scribble.
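If you prefer scripting that low-denoising refinement pass instead of using the web UI, it maps onto the `strength` parameter of the diffusers img2img pipeline; a small sketch, with the model ID, prompt, and file names as placeholder assumptions:

```python
# Rough sketch of the "edit by hand, then refine at low denoising" trick
# using diffusers' img2img pipeline. Model ID, prompt, and strength value
# are illustrative assumptions, not taken from the quoted comment.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The roughly edited image (color-corrected in GIMP, pasted-in assets, etc.).
init_image = Image.open("hand_edited.png").convert("RGB")

# strength ~0.1-0.2 keeps the composition and colors, only cleaning up details.
refined = pipe(
    prompt="detailed digital painting, soft lighting",
    image=init_image,
    strength=0.15,
).images[0]
refined.save("refined.png")
```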
- ControlNet on A1111 seems to have been broken in the new update
- Can anyone help me install SD and ControlNet on my Mac pro M1?
If there are no errors, go to the "Extensions" tab, then "Install from URL". There, enter "https://github.com/lllyasviel/ControlNet" then click "Install".
- According to the poll on the recent thread, /r/dalle2 community decided to keep the subreddit restricted on Reddit.
This is a good place to start reading. Given the open-source nature of SD, there are setups of various difficulty available. A1111 is the "standard" people enjoy because it's easy to plug in new stuff (ControlNet, new models, etc.), but it's not inherently easy to set up and get going. There is an installer for it, but I haven't tried it.
What are some alternatives?
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
textual_inversion
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
MinVIS
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
ddsp-singing-vocoders - Official implementation of SawSing (ISMIR'22)
sd-webui-controlnet - WebUI extension for ControlNet
civitai - A repository of models, textual inversions, and more
stable-diffusion-webui-prompt-travel - Travel between prompts in the latent space to make pseudo-animation, extension script for AUTOMATIC1111/stable-diffusion-webui.
Intrusion-Detection-System-Using-Machine-Learning - Code for IDS-ML: intrusion detection system development using machine learning algorithms (Decision tree, random forest, extra trees, XGBoost, stacking, k-means, Bayesian optimization..)
stable-diffusion-webui - Stable Diffusion web UI