tutorials
A collection of tutorials about training and generating with Stable Diffusion. (by BelieveDiffusion)
riffusion
Stable diffusion for real-time music generation (by riffusion)
|  | tutorials | riffusion |
|---|---|---|
| Mentions | 8 | 13 |
| Stars | 214 | 3,235 |
| Growth | - | 1.8% |
| Activity | 1.9 | 2.0 |
| Last commit | about 1 year ago | about 2 months ago |
| Language | Python | |
| License | - | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tutorials
Posts with mentions or reviews of tutorials. We have used some of these posts to build our list of alternatives and similar projects.
- ComfyUI Question: Batching and Search/Replace in prompt like A1111 X/Y/Z script?
  Having been generating very large batches for character training (per this tutorial, which worked really well for me the first time), it occurs to me that the lack of interactivity in the process might make it an ideal use case for ComfyUI, and ComfyUI's lower overhead might make it a bit quicker.
- No Waifu but - A beginner's attempt at creating a reasonably consistent character
  I created a TI to produce character images for a TTRPG campaign, following this great tutorial by u/BelieveDiffusion. I didn't aim for an esoteric level of consistency, just enough that seeing a couple of images would show a recognizable person. This will not hold up to close inspection or side-by-side comparison, but I didn't even try for that because a) we're playing a TTRPG, not comparing images, b) people change over time, and c) it's beyond my skill level.
- Stable Diffusion, LoRA and face consistency.
  See: https://github.com/BelieveDiffusion/tutorials/tree/main/consistent_character_embedding
- How can I fix: Unexpected error: processing could not begin...
- Best training guides?
  Hi guys, I'm very new to this (set up A1111 yesterday) and I'm trying to train the AI to generate pictures of myself for entertainment. I have collected a bunch of pictures and followed this guide, but none of the pictures it generated match (probably because that's not the intended use of the guide). I've googled a bunch and found guides on using LoRA and Dreambooth, but I've got no clue which ones are up to date and best to follow. Would love some advice here! I have 25-ish pictures, as the guide I followed suggested, if that matters for the recommendation. Thanks!
- I’ve created 200+ SD images of a consistent character, in consistent outfits, and consistent environments - all to illustrate a story I’m writing. I don't have it all figured out yet, but here’s everything I’ve learned so far… [GUIDE]
  To create a consistent character, the two primary methods are creating a LoRA or a Textual Inversion. I will not go into detail on this process, but will instead focus on what you can do to get the most out of an existing Textual Inversion, which is the method I use; this also applies to LoRAs. For a guide on creating a Textual Inversion, I recommend BelieveDiffusion’s guide for a straightforward, step-by-step process for generating a new “person” from scratch. See it on GitHub.
- Help with Faces
- Creating a consistent character for embedding in Stable Diffusion
riffusion
Posts with mentions or reviews of riffusion. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-04.
- You know what I REALLY want? Something like img2img but for sound/music.
  Why don’t you try Riffusion? https://github.com/riffusion/riffusion The general idea is to convert your audio waveform into an image (a spectrogram).
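The waveform-to-spectrogram idea mentioned above can be sketched in a few lines. This is an illustrative toy using `scipy.signal.spectrogram` with arbitrary parameters, not Riffusion's actual pipeline:

```python
import numpy as np
from scipy import signal

# Toy illustration of the core idea: turn an audio waveform into a 2D
# spectrogram "image" that an image model could operate on.
sample_rate = 44100
duration_s = 5.0
t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)

# Synthetic waveform: a 440 Hz tone with a little noise.
waveform = np.sin(2 * np.pi * 440.0 * t) + 0.01 * np.random.randn(t.size)

# Short-time Fourier transform -> magnitude spectrogram.
freqs, times, sxx = signal.spectrogram(waveform, fs=sample_rate, nperseg=1024)

# Log-scale and normalize to 0..255 so it could be saved as a grayscale image.
log_sxx = np.log1p(sxx)
image = (255 * (log_sxx - log_sxx.min()) / (log_sxx.max() - log_sxx.min())).astype(np.uint8)

print(image.shape)  # (frequency_bins, time_frames)
```

Going the other way (spectrogram back to audio) is the lossy part, since the magnitude spectrogram discards phase; that inversion is what makes the audio side of this approach tricky.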
- Interpolation between 2 seed images
- Just heard of "superdub", an AI music creator; I am searching for LOCAL models to use on my computer.
  riffusion - uses Stable Diffusion to generate spectrograms as images and converts them into audio. There's an online demo you can try. Not sure how easy it is to make full-length songs, as each generated image is about 5 seconds of audio. The demo sort of does it.
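Since each generated image only covers roughly five seconds of audio, one plausible way to build a longer track is to crossfade consecutive clips. This is a hypothetical sketch, not code from the Riffusion repo; the `crossfade_concat` helper and its parameters are invented for illustration:

```python
import numpy as np

def crossfade_concat(clips, sample_rate=44100, fade_s=0.5):
    """Join mono audio clips (1D float arrays) with a linear crossfade."""
    fade_len = int(sample_rate * fade_s)
    ramp_out = np.linspace(1.0, 0.0, fade_len)  # fade the old clip out
    ramp_in = 1.0 - ramp_out                    # fade the new clip in
    out = clips[0].astype(np.float64)
    for clip in clips[1:]:
        clip = clip.astype(np.float64)
        head, tail = out[:-fade_len], out[-fade_len:]
        # Overlap the tail of the running mix with the head of the next clip.
        mixed = tail * ramp_out + clip[:fade_len] * ramp_in
        out = np.concatenate([head, mixed, clip[fade_len:]])
    return out

# Stand-ins for two ~5-second generated clips.
a = np.ones(44100 * 5)
b = np.ones(44100 * 5)
song = crossfade_concat([a, b], fade_s=0.5)
print(song.size)  # two 5 s clips, overlapped by 0.5 s
```

A linear crossfade is the simplest choice; smoother results usually come from generating adjacent clips with overlapping latent interpolation, which is closer to what the Riffusion app does for its transitions.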
- Ask HN: What audio/sound-related OSS projects can I contribute to?
  Stable diffusion for real-time music generation:
  https://github.com/riffusion/riffusion
  https://github.com/riffusion/riffusion-app
- Choppy transition
  I'm using the (vanilla?) Riffusion + app: https://github.com/riffusion/riffusion and https://github.com/riffusion/riffusion-app
- LLWCHほこり - AIwave // more AI-generated vaporwave
  Riffusion: https://github.com/riffusion/riffusion
- Downloading songs?
  There's a riffusion app you can run locally.
- [P] Potential ML models for music generation that might run on CPU or low end GPU
  Well, if you've got Stable Diffusion running, you should be able to run Riffusion.
- Riffusion v0.3.0 - Stable diffusion for music and audio
- Riffusion Release v0.3 – Stable Diffusion for audio
What are some alternatives?
When comparing tutorials and riffusion you can also consider the following projects:
sdupdates - A mega collection of all resources and news related to Stable Diffusion. Focused around AUTOMATIC1111's webui (https://github.com/AUTOMATIC1111/stable-diffusion-webui)
SBEMU - legacy sound blaster emulation for DOS