riffusion-app
musika
|  | riffusion-app | musika |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 1,714 | 665 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Last commit | over 1 year ago | over 1 year ago |
| Language | TypeScript | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
riffusion-app
Stable Diffusion fine-tuned to generate Music – Riffusion
git clone https://github.com/hmartiro/riffusion-app

This is a separate, required project; it is necessary to do anything with the riffusion-inference server.
Riffusion – Stable Diffusion fine-tuned to generate Music
Other author here! This got posted a little earlier than we intended, so we didn't have our GPUs scaled up yet. Please hang on and try again throughout the day!
Meanwhile, please read our about page http://riffusion.com/about
It's all open source and the code lives at https://github.com/hmartiro/riffusion-app
This has been our hobby project for the past few months. Seeing the incredible results of Stable Diffusion, we were curious whether we could fine-tune the model to output spectrograms and then convert them to audio clips. The answer was a resounding yes, and we became addicted to generating music from text prompts. There are existing works for generating audio or MIDI from text, but none as simple or general as fine-tuning the image-based model.

Taking it a step further, we made an interactive experience for generating looping audio from text prompts in real time. To do this we built a web app where you type in prompts like a jukebox, and audio clips are generated on the fly. To make the audio loop and transition smoothly, we implemented a pipeline that does img2img conditioning combined with latent space interpolation.
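The smooth transitions described above hinge on interpolating between diffusion latents. A minimal sketch of spherical linear interpolation (slerp), a common way to blend diffusion latents without drifting out of the model's distribution, is shown below. This is an illustration using random arrays, not the actual riffusion pipeline; the function name and shapes are hypothetical.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Spherical linear interpolation between two latent tensors.

    Gaussian diffusion latents lie roughly on a hypersphere, so moving
    along the arc (rather than a straight line) keeps intermediate
    latents in-distribution and the decoded clips transition smoothly.
    """
    v0_unit = v0 / np.linalg.norm(v0)
    v1_unit = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_unit.ravel(), v1_unit.ravel()), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Nearly parallel latents: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Interpolate a 5-step path between two random stand-in "latents".
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 64, 64))
b = rng.normal(size=(4, 64, 64))
path = [slerp(t, a, b) for t in np.linspace(0.0, 1.0, 5)]
```

Each element of `path` would be denoised and decoded to a spectrogram in the real pipeline; the endpoints reproduce the two anchor latents exactly.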
musika
Riffusion – Stable Diffusion fine-tuned to generate Music
Awesome, there is another project out there that does it on CPU: https://github.com/marcoppasini/musika. Maybe mix the two, i.e. take the initial output of Musika, convert it to a spectrogram, and feed it to Riffusion to get more variation...
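As a concrete illustration of the "convert to spectrogram" step in the idea above, here is a minimal magnitude-spectrogram sketch using a Hann-windowed STFT. Riffusion itself works with mel spectrograms at specific settings and inverts them back to audio, which this simplified NumPy version does not reproduce; the function name and parameters are just illustrative.

```python
import numpy as np

def magnitude_spectrogram(audio: np.ndarray, n_fft: int = 512, hop: int = 128) -> np.ndarray:
    """Magnitude spectrogram via a Hann-windowed STFT.

    Returns an array of shape (n_frames, n_fft // 2 + 1): one row per
    analysis frame, one column per frequency bin.
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(audio) - n_fft) // hop
    frames = np.stack(
        [audio[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    return np.abs(np.fft.rfft(frames, axis=1))

# A 1-second 440 Hz test tone at 22.05 kHz: energy concentrates in one bin.
sr = 22050
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = int(spec.mean(axis=0).argmax())
# Bin width is sr / n_fft (about 43 Hz), so the peak lands near 440 Hz.
```

A real bridge between the two projects would also need to match riffusion's expected sample rate, mel scaling, and image normalization before the spectrogram could be fed in as an img2img init.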
[D] Why are there no good generative music AIs?
Try Musika: https://github.com/marcoppasini/musika
Musika: Fast Infinite Waveform Music Generation
What are some alternatives?
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-inference]
bumblebee - Pre-trained Neural Network models in Axon (+ 🤗 Models integration)
spleeter - Deezer source separation library including pretrained models.
sd-webui-riffusion - Riffusion extension for AUTOMATIC1111's SD Web UI
riffusion-app - Stable diffusion for real-time music generation (web app)
audio-diffusion - Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images.
cowbell-lol - cowbell.lol