| | course22p2 | latentblending |
|---|---|---|
| Mentions | 6 | 17 |
| Stars | 431 | 317 |
| Growth | 2.6% | 1.0% |
| Activity | 0.0 | 8.7 |
| Latest commit | 11 days ago | about 1 month ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
course22p2
- Ask HN: Daily practices for building AI/ML skills?
Practical Deep Learning for Coders: https://course.fast.ai/Lessons/part2.html
- Stanford A.I. Courses
- A quick visual guide to what's actually happening when you generate an image with Stable Diffusion
To me the most important bit is that the diffusion loop turns a noisy latent into an image, does so iteratively, and uses "guidance" in the form of a prompt/ControlNet image/etc. to do it. The scheduler part, I felt, was needlessly complex for this short explainer, so I hand-wave it away. If someone wants to dive in deeper, much deeper, they can go through the same thing I'm doing, which is this: https://course.fast.ai/Lessons/part2.html
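The loop described above can be sketched in a few lines of NumPy. This is a toy illustration, not the real pipeline: `predict_noise` stands in for the U-Net noise predictor, the prompt embedding is random, and the "scheduler" is hand-waved away as a fixed-size denoising step, exactly as the comment does.

```python
import numpy as np

def predict_noise(latent, embedding):
    # Stand-in for the U-Net; a real model would condition on the embedding.
    return latent * 0.1 + embedding * 0.0

def diffusion_loop(shape=(4, 8, 8), steps=20, guidance_scale=7.5, seed=0):
    rng = np.random.default_rng(seed)
    latent = rng.standard_normal(shape)   # start from pure noise
    prompt = rng.standard_normal(shape)   # stand-in prompt embedding
    uncond = np.zeros(shape)              # stand-in unconditional embedding
    for _ in range(steps):
        # Classifier-free guidance: push the conditional prediction
        # away from the unconditional one.
        noise_cond = predict_noise(latent, prompt)
        noise_uncond = predict_noise(latent, uncond)
        noise = noise_uncond + guidance_scale * (noise_cond - noise_uncond)
        # The scheduler step, hand-waved as a plain subtraction.
        latent = latent - noise
    return latent
```

Each iteration removes a bit of predicted noise from the latent; after the final step the (now clean) latent would be decoded to pixels by the VAE.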
- Practical Deep Learning for Coders - Part 2 overview
- Courses for an AI beginner
They also recently released a course for more experienced students where they teach you to implement the Stable Diffusion algorithm from scratch.
- From Deep Learning Foundations to Stable Diffusion (Part 2)
The full transcripts are available here in plain text form:
https://github.com/fastai/course22p2/tree/master/summaries
latentblending
- Introducing Steerable Motion v. 1.0, a ComfyUI custom node for steering videos using batches of images
- Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models
- A quick visual guide to what's actually happening when you generate an image with Stable Diffusion
So it ought to be pretty easy to implement latent blending on source and target images which were not generated by SD?
- Next frame prediction with ControlNet
Latent blending: https://github.com/lunarring/latentblending This one has a lot of potential, since it can be used to transition from one prompt to the next. It interpolates between the prompts and also between the image latents themselves.
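The interpolation idea in that comment can be sketched as follows. This is only an illustration of the principle, not the latentblending repo's actual code: the function names are hypothetical, and the real project mixes latents inside the diffusion process rather than just interpolating the endpoints. Spherical interpolation (slerp) is commonly used for Gaussian latents, while prompt embeddings are lerped.

```python
import numpy as np

def slerp(a, b, t):
    # Spherical interpolation between two latents of the same shape.
    af, bf = a.ravel(), b.ravel()
    cos_omega = np.dot(af, bf) / (np.linalg.norm(af) * np.linalg.norm(bf))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # vectors nearly parallel: fall back to lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def blend_frames(latent_a, latent_b, prompt_a, prompt_b, n_frames=5):
    # Produce (latent, prompt) pairs for each intermediate frame.
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        latent = slerp(latent_a, latent_b, t)        # blend image latents
        prompt = (1 - t) * prompt_a + t * prompt_b   # lerp prompt embeddings
        frames.append((latent, prompt))
    return frames
```

Each intermediate `(latent, prompt)` pair would then be run through the diffusion model to render one frame of the transition.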
- Bridge Remodeling Takes a Trippy Turn
You can also stay tuned on Twitter (https://twitter.com/j_stelzer) or GitHub (https://github.com/lunarring/latentblending/).
- Mindfuck animation
Yip, long interpolation using an experimental version of latent blending (https://github.com/lunarring/latentblending/). You will be able to do this yourself very soon with a nice Gradio UI.
- Music video I made using Stable Diffusion + the "latent blending" technique
Github: https://github.com/lunarring/latentblending/
- So is SD ever going to get a 'blend' function like midjourney pulled off, or is mixing images like that never going to happen...
- Fine-tuned a multi-subject model using EveryDream then created some videos morphing between the subjects using latent blending
Generated some images using that model and picked ones that had similar composition to create the videos using latent blending: https://github.com/lunarring/latentblending/
- Auto1111 Fork with pix2pix
Latent blending is amazing, and produces effects quite different from the available extensions. https://github.com/lunarring/latentblending
What are some alternatives?
developer - the first library to let you embed a developer agent in your own app!
stable-diffusion-webui-pix2pix - Stable Diffusion web UI
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
StableTuner - Finetuning SD in style.
playground - Play with neural networks!
instruct-pix2pix
machine-learning-specialization-andrew-ng - A collection of notes and implementations of machine learning algorithms from Andrew Ng's machine learning specialization.
Next_Frame_Prediction - Predicts the next frame of a series of gifs.
stylegan2-projecting-images - Projecting images to latent space with StyleGAN2.
ControlNet - Let us control diffusion models!
StableDiffusion-By-Parts - Slice and dice the Stable Diffusion pipeline, saving to a TIFF file in between sections.
Steerable-Motion - A ComfyUI node for driving videos using batches of images.