riffusion-app
Discontinued: Stable Diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-app] (by hmartiro)
-
riffusion-inference
Discontinued: Stable Diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-inference]
-
audio-diffusion
Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images.
Awesome, there is another project out there that does this on CPU: https://github.com/marcoppasini/musika. Maybe combine the two: take Musika's initial output, convert it to a spectrogram, and feed it to Riffusion to get more variation...
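The audio-to-spectrogram step that idea relies on can be sketched with plain NumPy. This is a toy magnitude spectrogram scaled to an 8-bit image, not Riffusion's actual mel-scaled pipeline; the function name and parameters are illustrative:

```python
import numpy as np

def spectrogram_image(x, n_fft=512, hop=128):
    """Magnitude spectrogram of a mono signal, scaled to a uint8 image,
    roughly the representation a spectrogram-based diffusion model consumes.
    """
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    # (freq_bins, time_frames) magnitude spectrogram
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1)).T
    db = 20 * np.log10(mag + 1e-6)
    db = np.clip(db, db.max() - 80, db.max())  # keep an 80 dB dynamic range
    img = ((db - db.min()) / (db.max() - db.min() + 1e-12) * 255)
    return img.astype(np.uint8)
```

Going the other way (image back to audio) is the hard part, since the image discards phase; Riffusion reconstructs it with phase-estimation techniques such as Griffin-Lim.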
Other author here! This got posted a little earlier than we intended, so we didn't have our GPUs scaled up yet. Please hang on and try again throughout the day!
Meanwhile, please read our about page: http://riffusion.com/about
It's all open source and the code lives at https://github.com/hmartiro/riffusion-app
This has been our hobby project for the past few months. Seeing the incredible results of stable diffusion, we were curious if we could fine-tune the model to output spectrograms and then convert them to audio clips. The answer was a resounding yes, and we became addicted to generating music from text prompts. There are existing works for generating audio or MIDI from text, but none as simple or general as fine-tuning the image-based model.

Taking it a step further, we made an interactive experience for generating looping audio from text prompts in real time. To do this we built a web app where you type in prompts like a jukebox, and audio clips are generated on the fly. To make the audio loop and transition smoothly, we implemented a pipeline that does img2img conditioning combined with latent space interpolation.
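Latent space interpolation between diffusion seeds is commonly done with spherical linear interpolation (slerp) rather than straight lerp, so the interpolated latents keep a plausible norm. A minimal NumPy sketch of the standard slerp trick (not the project's exact code; real latents would be flattened tensors):

```python
import numpy as np

def slerp(t, v0, v1, dot_threshold=0.9995):
    """Spherical interpolation between two latent vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values sweep along
    the great-circle arc between them, which tends to produce smoother
    diffusion transitions than linear interpolation.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = float(np.sum(v0n * v1n))
    if abs(dot) > dot_threshold:
        # vectors nearly parallel: plain lerp is numerically safer
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

In a looping setup, sweeping t from 0 to 1 between the latents of two clips (combined with img2img conditioning on the previous spectrogram) is what makes one clip transition smoothly into the next.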
Yes from https://huggingface.co/runwayml/stable-diffusion-v1-5. Our checkpoint works with automatic1111, and if you'd like to make an extension to decode to audio, it should be pretty straightforward: https://github.com/hmartiro/riffusion-inference/blob/main/ri...
It's not too hard these days with open-source BPM detection and stem separation libraries: https://github.com/deezer/spleeter
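As a rough illustration of how autocorrelation-based BPM detection works (a toy sketch, not what production libraries actually ship; the function name and defaults are made up, and the narrow BPM search range sidesteps the classic octave-error problem that real detectors handle properly):

```python
import numpy as np

def estimate_bpm(x, sr, bpm_min=100.0, bpm_max=180.0):
    """Naive tempo estimate: autocorrelate a crude onset envelope and
    pick the strongest lag inside the candidate BPM range."""
    env = np.abs(x)                       # crude onset envelope
    n = len(env)
    # FFT-based autocorrelation (zero-padded to avoid circular wrap)
    f = np.fft.rfft(env, 2 * n)
    ac = np.fft.irfft(f * np.conj(f))[:n]
    lag_min = int(sr * 60.0 / bpm_max)
    lag_max = int(sr * 60.0 / bpm_min)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    return 60.0 * sr / lag
```

For stem separation, Spleeter's pretrained 2/4/5-stem models do the heavy lifting; a BPM-aligned Musika-to-Riffusion mix would then just be a matter of slicing stems on beat boundaries.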
I have made a basic extension for AUTOMATIC1111's Web UI to save the mp3 files:
https://github.com/enlyth/sd-webui-riffusion