| | pyannote-audio | demucs |
|---|---|---|
| Mentions | 15 | 108 |
| Stars | 5,077 | 7,672 |
| Growth | 3.4% | 1.2% |
| Activity | 8.6 | 5.4 |
| Latest commit | 3 days ago | 8 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pyannote-audio
-
Open Source Libraries
pyannote/pyannote-audio
-
AI Transcribing tool for video with two voices?
Open Source. I've found this to be pretty nice; it's just a wrapper around some Hugging Face models: https://github.com/pyannote/pyannote-audio
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
(not the creator, but I've built something similar for personal use)
This is a great library for determining which speaker is speaking at each point in an audio file (this is called speaker diarization); I imagine they used it or something like it. Works really well out of the box!
https://github.com/pyannote/pyannote-audio
-
I wanted to use OpenAI's Whisper speech-to-text on my Mac without installing stuff in the Terminal so I made MacWhisper, a free Mac app to transcribe audio and video files for easy transcription and subtitle generation. Would love to hear some feedback on it!
Do you think pyannote could be implemented in the Pro version of the app to support diarization?
- I won several speaker diarization challenges with pyannote.audio
-
I made a free transcription service powered by Whisper AI
Free startup idea: Use Whisper with pyannote-audio[0]’s speaker diarization. Upload a recording, get back a multi-speaker annotated transcription.
Make a JSON API and I’ll be your first customer.
[0] https://github.com/pyannote/pyannote-audio
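The "link the two together" step these comments describe boils down to matching Whisper's timestamped transcript segments against pyannote's diarization turns. A minimal stdlib-only sketch of that merge, assigning each transcript segment to the speaker with the largest temporal overlap (the segment and turn data here are invented for illustration; real pipelines would get them from Whisper and pyannote respectively):

```python
def overlap(a_start, a_end, b_start, b_end):
    # Length of the intersection of two time intervals, in seconds (0 if disjoint).
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_speakers(transcript, turns):
    """Attach to each transcript segment the speaker whose diarization
    turns overlap it the most."""
    labeled = []
    for seg in transcript:
        totals = {}
        for t in turns:
            totals[t["speaker"]] = totals.get(t["speaker"], 0.0) + overlap(
                seg["start"], seg["end"], t["start"], t["end"]
            )
        best = max(totals, key=totals.get) if totals else "UNKNOWN"
        labeled.append({**seg, "speaker": best})
    return labeled

# Whisper-style segments (start/end in seconds) -- invented sample data.
transcript = [
    {"start": 0.0, "end": 4.0, "text": "Hi, thanks for joining."},
    {"start": 4.0, "end": 7.5, "text": "Happy to be here."},
]
# pyannote-style diarization turns -- invented sample data.
turns = [
    {"start": 0.0, "end": 4.2, "speaker": "SPEAKER_00"},
    {"start": 4.2, "end": 8.0, "speaker": "SPEAKER_01"},
]

for seg in assign_speakers(transcript, turns):
    print(seg["speaker"], seg["text"])
```

Real recordings have overlapping speech and segment boundaries that don't line up this cleanly, so production tools typically add smoothing on top of this max-overlap rule.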
-
Can Whisper differentiate between different voices?
Whisper can’t, but pyannote-audio can. I’ve seen a couple of prototypes out there which link the two together.
-
[D] Is there a way to distinguish different human voices from 1 audio file ?
You can use the pyannote Python library. It will identify the different speakers in the audio and create small audio files for each of those speakers.
- Post-Game Analysis: Destiny & Alex VS Andrew & Zen Shapiro
-
A quick and dirty tool for automatically analyzing speaking time in online debates (Effortpost)
This Colab notebook is basically a standard template (with small changes) provided by pyannote-audio, the library implementing the speaker diarization functionality we need.
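Once diarization has produced labeled turns, the speaking-time totals such a notebook reports reduce to summing turn durations per speaker. A minimal stdlib-only sketch (the turn data is invented for illustration; a real run would take the turns from pyannote's output):

```python
from collections import defaultdict

def speaking_time(turns):
    # Total seconds per speaker from (start, end, speaker) diarization turns.
    totals = defaultdict(float)
    for start, end, speaker in turns:
        totals[speaker] += end - start
    return dict(totals)

# Invented diarization output: (start_sec, end_sec, speaker_label)
turns = [
    (0.0, 12.5, "SPEAKER_00"),
    (12.5, 20.0, "SPEAKER_01"),
    (20.0, 31.0, "SPEAKER_00"),
]

print(speaking_time(turns))  # {'SPEAKER_00': 23.5, 'SPEAKER_01': 7.5}
```

For a debate analysis you would divide each total by the recording length to get each participant's share of airtime.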
demucs
-
Best way to extract a vocal stem from a song
I've had the best results from Facebook's Demucs. It's not too difficult to install, and I like the sound quality of their mdx_extra model. This is the command line I use (it uses the two-stem version: vocals, and everything else).
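The exact command isn't quoted above, but demucs's CLI does support selecting a model with `-n` and a vocals/accompaniment split with `--two-stems`, so the invocation described would look something like this (the filename is a placeholder; check `demucs --help` for your installed version):

```shell
# Split song.mp3 into two stems (vocals + everything else)
# using the mdx_extra model mentioned in the comment.
demucs --two-stems=vocals -n mdx_extra song.mp3
```

Output lands under a `separated/` directory by default, one folder per model and track.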
-
Open Source Libraries
facebookresearch/demucs: Stem separation
-
Show HN: Improved freemusicdemixer (AI music demixing in the browser)
For those interested, Facebook's Demucs page (https://github.com/facebookresearch/demucs) gives performance comparison for several models including open-unmix.
See also: https://www.stemroller.com This runs as a local app on Windows and Mac.
-
Show HN: Free AI-based music demixing in the browser
Demucs [1], one of the leading/SOTA systems, has an experimental 6-source model, `htdemucs_6s`, which adds piano and guitar:
>We are also releasing an experimental 6 sources model, that adds a guitar and piano source. Quick testing seems to show okay quality for guitar, but a lot of bleeding and artifacts for the piano source.
I also believe Audioshake [2] (a company in the space) is doing guitar separation as well.
1: https://github.com/facebookresearch/demucs
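Per the demucs README quoted above, the experimental six-source model is selected by name on the command line (filename is a placeholder):

```shell
# Separate into six stems: drums, bass, other, vocals, guitar, piano.
# Guitar quality is reported as okay; piano has bleeding/artifacts.
demucs -n htdemucs_6s song.mp3
```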
-
Romy & Fred again.. - Strong (Yelow Bootleg Remix) [2023]
I don't know which one /u/DarkMemoria used exactly but I use demucs. If you go to the Colab section you can run it by putting the audio files you want to separate into Google Drive.
-
AI integration has just been teased by Scott on the official forum.
Demucs v4 is the best open source option currently; that's also what the snippet from the video sounds like it uses.
-
Is there anyway I can play along to songs where the original guitar has been muted?
That already exists; for example, I use demucs to separate songs into 6 tracks, and then I mute whatever I need silenced in any DAW.
-
I need help removing vocals
I regularly use demucs (https://github.com/facebookresearch/demucs). It might be overwhelming if you are not used to working with the terminal, but it's as good as all the wrapper sites that ask for payment. Also, there are probably GUI projects that make it even easier.
-
Are there any websites or programs that can separate vocals and drums from samples?
There's also the open source software https://github.com/facebookresearch/demucs which I assume is what many of the free websites are using behind the scenes. There's a demo site here: https://huggingface.co/spaces/akhaliq/demucs but I haven't tested it for time limits/upload limits etc.
-
[Request] Need help cleaning up an instrumental to play at my wedding.
I used demucs to try and separate the vocals from the instrumental with fairly decent results. When playing the instrumental you can still vaguely hear some remnants of the vocals and I worry when it's played over a real sound system at the wedding it will be very obvious.
What are some alternatives?
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
mdx-net - KUIELAB-MDX-Net got the 2nd place on the Leaderboard A and the 3rd place on the Leaderboard B in the MDX-Challenge ISMIR 2021
speechbrain - A PyTorch-based Speech Toolkit
spleeter-web - Self-hostable web app for isolating the vocal, accompaniment, bass, and drums of any song. Supports Spleeter, D3Net, Demucs, Tasnet, X-UMX. Built with React and Django.
Resemblyzer - A python package to analyze and compare voices with deep learning
Demucs-Gui - A GUI for music separation project demucs
Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.
spleeter - Deezer source separation library including pretrained models.
inaSpeechSegmenter - CNN-based audio segmentation toolkit. Detects speech, music, noise and speaker gender. Designed for large-scale gender equality studies based on speech time per gender.
SpleeterGui - Windows desktop front end for Spleeter - AI source separation
uis-rnn - This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper Fully Supervised Speaker Diarization.
ultimatevocalremovergui - GUI for a Vocal Remover that uses Deep Neural Networks.