| | sd-webui-riffusion | riffusion-inference |
|---|---|---|
| Mentions | 6 | 6 |
| Stars | 189 | 714 |
| Growth | - | - |
| Activity | 4.4 | 10.0 |
| Latest commit | 11 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sd-webui-riffusion
- Stable Diffusion Dreaming of Itself: "Automatic 1111's SD Webui RIFFUSION: https://github.com/enlyth/sd-webui-riffusion"
- Riffusion tuning with your songs
- Riffusion v0.3.0: "Does anyone know how to update the A1111 Riffusion extension at https://github.com/enlyth/sd-webui-riffusion to incorporate the changes from v0.3.0?"
- How to load model to Automatic1111?: "enlyth/sd-webui-riffusion: Riffusion extension for AUTOMATIC1111's SD Web UI (github.com)"
- Stable Diffusion fine-tuned to generate Music — Riffusion
- Riffusion – Stable Diffusion fine-tuned to generate Music: "I have made a basic extension for AUTOMATIC1111's Web UI to save the mp3 files: https://github.com/enlyth/sd-webui-riffusion"
riffusion-inference
- Looping/Interpolation
- Riffusion Manipulation Tools: "Just put the generated image into the seed_images folder on the inference server, replacing one of the existing seed images: https://github.com/hmartiro/riffusion-inference/tree/main/seed_images"
- Am I understanding this right?: "I think so. This is the specific code they say they use to do the image-to-audio conversion."
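The image-to-audio step mentioned above recovers a waveform from the magnitudes encoded in the spectrogram image; the standard tool for this is the Griffin-Lim phase-estimation algorithm. Below is a minimal SciPy sketch of that idea, assuming a linear-frequency magnitude spectrogram; riffusion-inference's actual code uses different libraries, scaling constants, and STFT parameters, so treat the values here as placeholders.

```python
import numpy as np
from scipy.signal import stft, istft


def image_to_magnitudes(img: np.ndarray, power: float = 4.0) -> np.ndarray:
    """Map 8-bit greyscale pixels back to spectrogram magnitudes.

    The power-law mapping is an assumption standing in for whatever
    compression was applied when the spectrogram was rendered.
    """
    return (img.astype(np.float32) / 255.0) ** power


def griffin_lim(mag, nperseg=510, hop=128, n_iter=32, fs=44100):
    """Estimate a waveform whose STFT magnitude matches `mag` (freq x time)."""
    noverlap = nperseg - hop
    rng = np.random.default_rng(0)
    # Start from random phase, then alternate: inverse STFT -> forward STFT,
    # keeping the known magnitudes and only updating the phase estimate.
    spec = mag * np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        _, audio = istft(spec, fs=fs, nperseg=nperseg, noverlap=noverlap)
        _, _, rebuilt = stft(audio, fs=fs, nperseg=nperseg, noverlap=noverlap)
        # Frame counts can drift by one due to boundary padding; align them.
        cols = min(rebuilt.shape[1], mag.shape[1])
        phase = np.ones(mag.shape, dtype=complex)
        phase[:, :cols] = np.exp(1j * np.angle(rebuilt[:, :cols]))
        spec = mag * phase
    _, audio = istft(spec, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return audio
```

With `nperseg=510` the spectrogram has 256 frequency bins, so a 256-pixel-tall image maps directly onto the STFT grid; a real decoder would also have to undo any mel-frequency warping the encoder applied.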
- Stable Diffusion fine-tuned to generate Music — Riffusion: "I'm still reading, but it looks like they're doing some extra pre- and post-processing: https://github.com/hmartiro/riffusion-inference"
- Riffusion – Stable Diffusion fine-tuned to generate Music: "Yes, from https://huggingface.co/runwayml/stable-diffusion-v1-5. Our checkpoint works with automatic1111, and if you'd like to make an extension to decode to audio, it should be pretty straightforward: https://github.com/hmartiro/riffusion-inference/blob/main/ri..."
What are some alternatives?
spleeter - Deezer source separation library including pretrained models.
riffusion-app - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-app]
bumblebee - Pre-trained Neural Network models in Axon (+ 🤗 Models integration)
riffusion - Stable diffusion for real-time music generation
StableDiffusionTelegram - a Telegram bot that lets you generate images with Stable Diffusion in a much more comfortable and simple way.
stable-diffusion-webui - Stable Diffusion web UI
audio-diffusion - Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images.
collage-diffusion-ui - An open source, layer-based web interface for Collage Diffusion - use a familiar Photoshop-like interface and let the AI harmonize the details.