|  | free-music-demixer | EfficientAT |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 323 | 183 |
| Growth | - | - |
| Activity | 8.0 | 6.7 |
| Latest commit | about 1 month ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
free-music-demixer
- Ask HN: What are some of the best user experiences with AI?
- Free-music-demixer adds multi-threading to run Demucs faster in the browser
Hi HN,
Over the Christmas break I added multi-threading to the WASM Demucs module in freemusicdemixer.
Demucs (v4 hybrid transformer) is a much higher-quality model than the previous default, but it ran very slowly when limited to one worker: ~17 minutes for an average 4-minute song.
I have since implemented multi-threading with WebWorkers.
If you raise the "MAX MEMORY" setting to 16 GB or 32 GB, your track will demix within 5-7 minutes, producing state-of-the-art results.
There is also support for the Demucs 6-source model which adds piano and guitar stems.
Please reach out and be loud about any bugs or UX issues you encounter: https://github.com/sevagh/free-music-demixer/issues
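The fan-out idea, split the track into segments, demix each segment in its own worker, then stitch the results back in order, can be sketched as a Python analogy. The real site uses WASM plus WebWorkers, and practical demixers typically overlap segments and crossfade to hide boundary artifacts; everything below, including `demix_segment`, is illustrative, not the project's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def demix_segment(segment):
    # Stand-in for per-segment model inference; here it just negates
    # each sample so the parallel round trip is easy to verify.
    return [-s for s in segment]

def demix_parallel(samples, n_workers=4):
    # Split the track into contiguous segments, one per worker.
    size = -(-len(samples) // n_workers)  # ceiling division
    segments = [samples[i:i + size] for i in range(0, len(samples), size)]
    # Process segments concurrently; map() preserves input order,
    # so concatenating the results reconstructs the full track.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(demix_segment, segments)
    return [s for segment in results for s in segment]
```

With this structure, raising the worker count (bounded by available memory, as the "MAX MEMORY" setting suggests) shortens wall-clock time roughly in proportion.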
- Show HN: Improved freemusicdemixer – AI music demixing in the browser
- FLaNK Stack Weekly for 17 July 2023
- Show HN: Free AI-based music demixing in the browser
* Post-processing step (bigger impact)
I tried to tackle the post-processing step in my C++ code (which would win ~1 dB in quality across all targets) but it's too tricky for now [2]. Maybe some other day.
1: https://github.com/sevagh/free-music-demixer/blob/main/examp...
2: https://github.com/sigsep/open-unmix-pytorch/blob/master/ope...
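The post-processing referenced in [2] is open-unmix's multichannel Wiener filtering. A minimal single-channel stand-in is the ratio (soft) mask, sketched here per time-frequency bin in plain Python; this is a simplified illustration under that assumption, not the linked implementation.

```python
def ratio_masks(estimates, eps=1e-10):
    # Given per-source magnitude estimates for one time-frequency bin,
    # return soft masks proportional to each source's power.
    # The masks sum to 1 (up to eps), so no energy is lost or invented.
    powers = [e ** 2 for e in estimates]
    total = sum(powers) + eps
    return [p / total for p in powers]

def refine_bin(mix_value, estimates):
    # Redistribute the mixture bin among sources according to the masks,
    # so the refined source estimates sum exactly to the mixture.
    return [m * mix_value for m in ratio_masks(estimates)]
```

Applied across the whole spectrogram, this kind of consistency constraint between the sources and the mixture is what buys the roughly 1 dB mentioned above.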
EfficientAT
- Show HN: Free AI-based music demixing in the browser
Interesting, I attempted to do the same as you but stopped just shy of BPM matching.
However, I did get sound similarity working using an audio tagging neural net [1]. I chopped off the first and last 15 seconds of every song in my collection and ran them all through this analysis, which produces a ~520-dimensional vector. I then targeted specific endings I wanted to match and used Euclidean distance to find the closest matching song beginning.
YMMV, but I thought it actually worked pretty well; I just never got around to automating the BPM matching. I can try to look for my old script if you're interested :)
[1] https://github.com/fschmid56/EfficientAT
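The matching step described above can be sketched as follows. The embeddings stand in for the ~520-dimensional vectors produced by the tagger, and `nearest_song` is a hypothetical helper for this illustration, not part of EfficientAT.

```python
import math

def nearest_song(target_ending, beginnings):
    # target_ending: embedding of the ending we want to follow.
    # beginnings: dict mapping song name -> embedding of its opening
    # (computed after trimming the first/last 15 s, per the comment).
    # Returns the song whose beginning is closest in Euclidean distance.
    return min(beginnings,
               key=lambda name: math.dist(target_ending, beginnings[name]))
```

At a few hundred dimensions and a personal-collection scale, a brute-force scan like this is fast enough; an approximate nearest-neighbor index only becomes worthwhile for much larger libraries.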
What are some alternatives?
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
open-unmix-pytorch - Open-Unmix - Music Source Separation for PyTorch
1000sharks.xyz - AI "metal artist" with SampleRNN (mirror from GitLab)
heimdall - Dashboard for operating Flink jobs and deployments.
umx.cpp - C++17 port of Open-Unmix-PyTorch with streaming LSTM inference, ggml, quantization, and Eigen
dt - dt - duct tape for your unix pipes
spleeter - Deezer source separation library including pretrained models.
video2dataset - Easily create large video datasets from video URLs
demucs - Code for the paper Hybrid Spectrogram and Waveform Source Separation
khoj - Your AI second brain. A copilot to get answers to your questions, whether they be from your own notes or from the internet. Use powerful, online (e.g gpt4) or private, local (e.g mistral) LLMs. Self-host locally or use our web app. Access from Obsidian, Emacs, Desktop app, Web or Whatsapp.
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more