sd-webui-riffusion vs riffusion

| | sd-webui-riffusion | riffusion |
|---|---|---|
| Mentions | 6 | 13 |
| Stars | 189 | 3,191 |
| Growth | - | 2.2% |
| Activity | 4.4 | 2.0 |
| Last Commit | 11 months ago | 19 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sd-webui-riffusion

- Stable Diffusion Dreaming of Itself
  Automatic1111's SD Webui RIFFUSION: https://github.com/enlyth/sd-webui-riffusion

- Riffusion tuning with your songs

- Riffusion v0.3.0
  Does anyone know how to update the A1111 Riffusion extension at https://github.com/enlyth/sd-webui-riffusion to incorporate the changes from v0.3.0?

- How to load model to Automatic1111?
  enlyth/sd-webui-riffusion: Riffusion extension for AUTOMATIC1111's SD Web UI (github.com)

- Stable Diffusion fine-tuned to generate Music — Riffusion

- Riffusion – Stable Diffusion fine-tuned to generate Music
  I have made a basic extension for AUTOMATIC1111's Web UI to save the mp3 files: https://github.com/enlyth/sd-webui-riffusion
riffusion

- You know what I REALLY want? Something like img2img but for sound/music.
  Why don't you try Riffusion? https://github.com/riffusion/riffusion The general idea is to convert your audio waveform into an image (a spectrogram).
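The waveform-to-image idea quoted above can be sketched in a few lines. This is a minimal illustration using SciPy's short-time Fourier transform, not Riffusion's actual pipeline; the STFT parameters and the dB normalization below are illustrative choices, not values taken from the project.

```python
# Sketch of the core idea: render audio as a spectrogram "image" that an
# image model (e.g. Stable Diffusion) could operate on. Illustrative only.
import numpy as np
from scipy.signal import stft

sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
# A toy "song": a tone sweeping upward from 440 Hz.
waveform = np.sin(2 * np.pi * (440 + 220 * t) * t)

# Short-time Fourier transform: frequency on one axis, time on the other.
freqs, times, Z = stft(waveform, fs=sample_rate, nperseg=1024, noverlap=768)

# Log-magnitude, normalized to 0..255 — an 8-bit grayscale image that
# img2img-style models can consume.
magnitude_db = 20 * np.log10(np.abs(Z) + 1e-8)
lo, hi = magnitude_db.min(), magnitude_db.max()
image = ((magnitude_db - lo) / (hi - lo) * 255).astype(np.uint8)

print(image.shape)  # (frequency bins, time frames)
```

Going the other direction — image back to audio — needs a phase-reconstruction step (Riffusion uses Griffin-Lim-style inversion), since the image only stores magnitudes.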
- Interpolation between 2 seed images

- Just heard of "superdub", AI MUSIC creator, I am searching for LOCAL models to use on my computer.
  riffusion - uses Stable Diffusion to generate spectrograms as images and converts them into audio. There's an online demo you can try. Not sure how easy it is to make full-length songs, as each image generated is about 5 seconds of audio. The demo sort of does it.
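The "about 5 seconds per image" figure quoted above follows directly from the spectrogram geometry: clip length is image width (in time frames) times the STFT hop length, divided by the sample rate. The parameter values below are assumptions chosen to be in the ballpark of Riffusion's published defaults, not read from its source here.

```python
# Back-of-the-envelope for the ~5-seconds-per-image claim.
# Assumed, roughly Riffusion-like parameters (illustrative):
sample_rate = 44100   # Hz
hop_length = 441      # samples advanced per spectrogram column
image_width = 512     # time frames, i.e. pixels across the image

clip_seconds = image_width * hop_length / sample_rate
print(f"{clip_seconds:.2f} s per 512-pixel image")  # 5.12 s
```

This also shows why longer songs require stitching or interpolating between multiple generated images rather than widening a single one.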
- Ask HN: What audio/sound-related OSS projects can I contribute to?
  Stable diffusion for real-time music generation:
  https://github.com/riffusion/riffusion
  https://github.com/riffusion/riffusion-app

- Choppy transition
  I'm using the (Vanilla?) Riffusion + App: https://github.com/riffusion/riffusion https://github.com/riffusion/riffusion-app

- LLWCHほこり - AIwave // more AI-generated vaporwave
  Riffusion: https://github.com/riffusion/riffusion

- Downloading songs?
  There's a Riffusion app you can run locally.

- [P] Potential ML models for music generation that might run on CPU or low end GPU
  Well, if you've got Stable Diffusion running, you should be able to run Riffusion.

- Riffusion v0.3.0 - Stable diffusion for music and audio

- Riffusion Release v0.3 – Stable Diffusion for audio
What are some alternatives?
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-inference]
SBEMU - legacy sound blaster emulation for DOS
spleeter - Deezer source separation library including pretrained models.
bumblebee - Pre-trained Neural Network models in Axon (+ 🤗 Models integration)
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion]
stable-diffusion-webui - Stable Diffusion web UI
DSP.jl - Filter design, periodograms, window functions, and other digital signal processing functionality
audio-diffusion - Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images.
StableFusion - Transform text into images and images into new ones using AI. Our user-friendly web app, built with Diffusion, Python, and Streamlit, offers customizable outputs in various styles and formats.
riffusion-app - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-app]
Dplug - Audio plugin framework. VST2/VST3/AU/AAX/LV2 for Linux/macOS/Windows.