riffusion-app
spleeter
| | riffusion-app | spleeter |
|---|---|---|
| Mentions | 3 | 230 |
| Stars | 1,714 | 24,878 |
| Growth | - | 1.4% |
| Activity | 10.0 | 1.5 |
| Last commit | over 1 year ago | about 1 month ago |
| Language | TypeScript | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
riffusion-app
- Stable Diffusion fine-tuned to generate Music — Riffusion
git clone https://github.com/hmartiro/riffusion-app

This is a separate project, required in order to do anything with the riffusion-inference server.
- Riffusion – Stable Diffusion fine-tuned to generate Music
Other author here! This got posted a little earlier than we intended, so we didn't have our GPUs scaled up yet. Please hang on and try throughout the day!
Meanwhile, please read our about page http://riffusion.com/about
It’s all open source and the code lives at https://github.com/hmartiro/riffusion-app
This has been our hobby project for the past few months. Seeing the incredible results of stable diffusion, we were curious if we could fine-tune the model to output spectrograms and then convert them to audio clips. The answer was a resounding yes, and we became addicted to generating music from text prompts. There are existing works for generating audio or MIDI from text, but none as simple or general as fine-tuning the image-based model.

Taking it a step further, we made an interactive experience for generating looping audio from text prompts in real time. To do this we built a web app where you type in prompts like a jukebox, and audio clips are generated on the fly. To make the audio loop and transition smoothly, we implemented a pipeline that does img2img conditioning combined with latent space interpolation.
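The latent space interpolation the authors mention can be sketched roughly as follows. This is a minimal illustration assuming numpy; the `slerp` helper and the latent shapes are illustrative, not the actual riffusion code:

```python
# Spherical linear interpolation (slerp) between two diffusion latents,
# commonly used to blend seeds so consecutive clips transition smoothly.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Interpolate between latents a and b for t in [0, 1]."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two latent vectors
    cos_omega = np.clip(
        np.dot(a_flat, b_flat)
        / (np.linalg.norm(a_flat) * np.linalg.norm(b_flat)),
        -1.0, 1.0,
    )
    omega = np.arccos(cos_omega)
    if np.isclose(omega, 0.0):
        # Nearly parallel latents: plain lerp avoids division by ~0
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

In an img2img setting, each new prompt's latent would be slerped toward the previous clip's latent over several steps, so the generated spectrograms (and hence the audio) drift smoothly rather than jumping.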
spleeter
- Are stems a good way of making mashups?
Virtual DJ's and other apps' stem separators are shrunken versions of this model: https://github.com/deezer/spleeter. You'll get better results by downloading the original and their large model.
- Big News!
I have used multiple tools at this point. It depends on the scene. I use https://ultimatevocalremover.com/, https://github.com/deezer/spleeter/, iZotope RX. There are also multiple options online, I would personally recommend https://vocalremover.org/.
- Anybody here know what AI model Steinberg's Spectralayers uses to do stem separation?
- Show HN: Free AI-based music demixing in the browser
I tried to use it but had some issues, as did others in the thread.
I have tried many sources and methods over the years and settled on spleeter [0]. It works well even for 10+ minute songs and varying styles, from flamenco to heavy metal.
[0] https://github.com/deezer/spleeter
- AI tools list sorted by category in one place
Spleeter is pretty good: https://github.com/deezer/spleeter. Apparently it is used in some DJ applications.
- Software to lower tracks?
- Where does one legally get stems for remixes?
Haha, GitHub and command lines and all can be confusing, but it's certainly worth the effort because it lets you do everything for free. Here's the online tutorial: https://github.com/deezer/spleeter/wiki/1.-Installation
- Audio and python help
- Are there any websites or programs that can separate vocals and drums from samples?
Chopped from their website: "Simple Stems is a quick and easy way to decompose any audio into its constituent parts. The plugin uses the well-established Spleeter algorithm by Deezer to deconstruct songs into 2, 4 or 5 stems. The results are stunning, though more complicated mixes and live recordings are not always perfectly decomposed."
- Ask HN: Is there an ML model that can go from an audio song to sheet music?
I was going to post basic pitch from Spotify but it looks like billconan beat me to it. That said I can give you a bit more advice. The Spotify basic pitch model isn't too good at multi-track input. It's capable of it, but you may actually get better results if you separate out the tracks first and then run them individually through the basic pitch model.
In order to do this you can use a source/stem separation model like spleeter (https://github.com/deezer/spleeter) and then run the basic pitch model (or any other MIDI transcription model). There are others you can try which may yield better results, for example Omnizart (https://github.com/Music-and-Culture-Technology-Lab/omnizart).
Either way the key words you want to be looking for are "midi transcription" and "stem separation", should help you find more models to try for both steps. Good luck! :)
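The two-step recipe from this thread (separate stems, then transcribe each stem to MIDI) can be sketched as CLI invocations driven from a small Python script. This is a hypothetical glue script, not code from either project; the `spleeter separate` and `basic-pitch` argument orders reflect their documented CLIs, but double-check against each project's docs before relying on them:

```python
# Hypothetical pipeline: Spleeter for stem separation, then Spotify's
# basic-pitch for MIDI transcription of a separated stem.
import subprocess
from pathlib import Path

def build_commands(song: str, workdir: str = "out"):
    """Return the two CLI invocations: stem separation, then transcription."""
    stems_dir = Path(workdir) / "stems"
    midi_dir = Path(workdir) / "midi"
    # Spleeter writes stems into <stems_dir>/<song name>/
    separate = ["spleeter", "separate", "-p", "spleeter:4stems",
                "-o", str(stems_dir), song]
    # basic-pitch takes the output directory first, then the audio file(s);
    # "other.wav" is the non-vocal/bass/drums stem in the 4stems model
    stem_wav = stems_dir / Path(song).stem / "other.wav"
    transcribe = ["basic-pitch", str(midi_dir), str(stem_wav)]
    return separate, transcribe

def run_pipeline(song: str, workdir: str = "out") -> None:
    # Requires `pip install spleeter basic-pitch` and ffmpeg on PATH.
    for cmd in build_commands(song, workdir):
        subprocess.run(cmd, check=True)
```

Running the transcription per stem, rather than on the full mix, is exactly the workaround the comment suggests for basic pitch's weakness on multi-track input.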
What are some alternatives?
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-inference]
ultimatevocalremovergui - GUI for a Vocal Remover that uses Deep Neural Networks.
bumblebee - Pre-trained Neural Network models in Axon (+ 🤗 Models integration)
open-unmix-pytorch - Open-Unmix - Music Source Separation for PyTorch
riffusion-app - Stable diffusion for real-time music generation (web app)
demucs - Code for the paper Hybrid Spectrogram and Waveform Source Separation
cowbell-lol - cowbell.lol
SpleeterGui - Windows desktop front end for Spleeter - AI source separation
audio-diffusion - Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images.
SpleetGUI - Spleeter GUI version
sd-webui-riffusion - Riffusion extension for AUTOMATIC1111's SD Web UI
spleeter-web - Self-hostable web app for isolating the vocal, accompaniment, bass, and drums of any song. Supports Spleeter, D3Net, Demucs, Tasnet, X-UMX. Built with React and Django.