espnet vs Resemblyzer
| | espnet | Resemblyzer |
|---|---|---|
| Mentions | 15 | 4 |
| Stars | 7,872 | 2,592 |
| Growth | 2.5% | 2.0% |
| Activity | 10.0 | 3.4 |
| Latest commit | about 13 hours ago | 7 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
espnet
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
You might check out this list from espnet. They list the different corpora they use to train their models, sorted by language and task (ASR, TTS, etc.):
https://github.com/espnet/espnet/blob/master/egs2/README.md
-
[D] What's stopping you from working on speech and voice?
- https://github.com/espnet/espnet
An Icelandic text-to-speech voice that can be used on a Mac
-
High quality, fast performing, local text to speech generation
This link has instructions for doing this for a Japanese model. It would have to be altered to work with LJSpeech and the fine-tuning dataset.
-
Text to speech generation
This work is made possible by the excellent advancements in text-to-speech modeling. ESPnet is a great project and should be checked out for more advanced and wider-ranging use cases. This pipeline was also made possible by the great work from espnet_onnx in building a framework for exporting models to ONNX.
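As a rough illustration of that export workflow, here is a minimal sketch based on espnet_onnx's documented API; the model tag is a placeholder you would swap for a real tag from the ESPnet model zoo, and the input file is hypothetical:

```python
# Hedged sketch of the espnet_onnx export/inference flow; the tag below is
# a placeholder, not a real model name.
from espnet_onnx.export import ASRModelExport

m = ASRModelExport()
# Fetches the pretrained ESPnet model and writes ONNX files to a local cache.
m.export_from_pretrained("<model-tag-from-the-espnet-zoo>", quantize=True)

# Inference then runs through onnxruntime, with no PyTorch dependency.
import librosa
from espnet_onnx import Speech2Text

speech2text = Speech2Text(tag_name="<model-tag-from-the-espnet-zoo>")
speech, _ = librosa.load("utterance.wav", sr=16000)  # hypothetical input file
print(speech2text(speech)[0][0])  # best hypothesis text
```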
-
[P] TorToiSe - a true zero-shot multi-voice TTS engine
CMU WAVLab maintains ESPnet (https://espnet.github.io/espnet/), which includes a number of high-quality TTS models, including VITS (which in my subjective experience is just as good as what is demonstrated here). Inference on the various ESPnet pretrained TTS models is also reasonable; on my totally mid PC setup, generating the waveform takes on average 5 seconds per word.
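For the curious, running one of those pretrained TTS models is short. A minimal sketch using ESPnet's espnet2 inference API, assuming the LJSpeech VITS tag from the model zoo (the output filename is arbitrary):

```python
# Hedged sketch of TTS inference with a pretrained ESPnet VITS model.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# "kan-bayashi/ljspeech_vits" is one of the pretrained VITS tags in the zoo.
tts = Text2Speech.from_pretrained("kan-bayashi/ljspeech_vits")
wav = tts("Hello world, this is a test.")["wav"]
sf.write("out.wav", wav.numpy(), tts.fs)  # tts.fs is the model's sample rate
```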
-
How to get a job in NLP?
The reason I'm saying this is to point out that having an in-depth knowledge of speech processing/generation requires a lot of background in signal processing and human speech in general (e.g. acoustics and phonetics). However, if you're not into learning everything there is to know about a subject, just take one state-of-the-art example and study it as best as you can. Pick one environment/toolkit, for example espnet, and simply go with that.
-
Help picking a good speech recognition library
https://github.com/espnet/espnet (kind of like a newer Kaldi, but also not beginner-friendly)
-
speechbrain VS espnet - a user suggested alternative
2 projects | 13 Oct 2021
Both provide end-to-end ASR support, but espnet has more utilities, whereas speechbrain is cleaner.
-
Need help with training an ASR model from scratch.
This is a relatively small amount of speech for training a model from scratch, but you can use another pre-trained model for initialization. There are a number of end-to-end ASR toolkits that can be used for this: https://github.com/NVIDIA/NeMo and https://github.com/espnet/espnet
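To give a feel for the espnet side, here is a hedged sketch of decoding with a pretrained espnet2 model (the tag is a placeholder for a real model-zoo entry; fine-tuning on your own data goes through the egs2 recipe scripts rather than this inference API):

```python
# Hedged sketch of ASR decoding with a pretrained ESPnet2 model; the tag is
# a placeholder, and "utterance.wav" is a hypothetical 16 kHz mono recording.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("<asr-model-tag-from-the-espnet-zoo>")
speech, rate = sf.read("utterance.wav")
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```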
Resemblyzer
-
Build an Audio-Driven Speaker Recognition System Using Open-Source Technologies — Resemblyzer and QdrantDB.
Resemblyzer lets us derive a high-level representation of a voice through a deep learning model. It simplifies developers' lives by enabling them to convert audio clips into vectors with just a few lines of code, eliminating the need to build neural networks themselves. See the official GitHub repository.
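As a rough sketch of that "few lines of code" workflow (the file names here are hypothetical):

```python
# Minimal sketch of embedding utterances with Resemblyzer.
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # loads the pretrained speaker encoder

# Each utterance becomes a single 256-dimensional, L2-normalized embedding.
embed_a = encoder.embed_utterance(preprocess_wav(Path("speaker1.wav")))
embed_b = encoder.embed_utterance(preprocess_wav(Path("speaker2.wav")))

# Since the embeddings are normalized, the dot product is cosine similarity.
print("similarity:", float(np.dot(embed_a, embed_b)))
```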
-
Get timestamps for .wav partials;
I want to modify Resemblyzer's speaker diarization script to cut out the parts of an audio file where a specific speaker isn't present. While the graph generated by the original demo seems alright, the timestamps at which it chooses to cut the audio are off. I reached this conclusion because when I output all of the timestamp information, my 22-minute-long video came out to 1036 seconds. Also, the variable I'm indexing the time by seems to be a collection of "wav partials as a list of slices", as described by the function that generates its value. Furthermore, the function I was modifying to get the time says that the intervals are unreliable. This is bad because, as you will see below in my code, when cutting the video with ffmpeg I treat them as if they were one-to-one with the video:

```python
from pathlib import Path
from os import listdir, system
from os.path import join

import numpy as np  # explicit import (previously pulled in via demo_utils' star import)
from resemblyzer import preprocess_wav, VoiceEncoder, sampling_rate


def Diarization(path, file, segments):
    wav_fpath = Path(join(path, file))
    wav = preprocess_wav(wav_fpath)
    speaker_names = ["Peter"]
    # Reference clips for each speaker; the end index must also be scaled by
    # sampling_rate (a misplaced parenthesis fixed here).
    speaker_wavs = [wav[int(s[0] * sampling_rate):int(s[1] * sampling_rate)]
                    for s in segments]
    encoder = VoiceEncoder("cpu")
    print("Running the continuous embedding on cpu, this might take a while...")
    _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
    speaker_embeds = [encoder.embed_utterance(speaker_wav) for speaker_wav in speaker_wavs]
    similarity_dict = {name: cont_embeds @ speaker_embed
                       for name, speaker_embed in zip(speaker_names, speaker_embeds)}
    # Midpoint of each partial's sample slice, converted to seconds.
    times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
    keep = True
    cutTimes = [[times[0], times[-1]]]
    for i in range(len(wav_splits)):
        similarities = [s[i] for s in similarity_dict.values()]
        best = np.argmax(similarities)
        name, similarity = list(similarity_dict.keys())[best], similarities[best]
        if similarity > 0.65:
            if not keep:
                cutTimes.append([times[i], times[-1]])
                keep = True
        elif keep:
            cutTimes[-1][1] = times[i]
            keep = False
    # Build an ffmpeg aselect expression that keeps only the chosen intervals.
    cutCommand = "+".join(f"between(t,{seg[0]},{seg[1]})" for seg in cutTimes)
    addMe = "Cut - "
    cmd = (f"ffmpeg -y -i \"{join(path, file)}\" "
           f"-af \"aselect='{cutCommand}',asetpts=N/SR/TB\" "
           f"\"{join(path, addMe + file)}\"")
    print(cmd)
    system(cmd)


path = r'C:\Users\mlfre\OneDrive\Desktop\Resemblyzer\Resemblyzer-master\audio_data'
for file in listdir(path):
    if file == "peter.mp3":
        segments = [[12, 21]]  # [start, end] in seconds where the target speaker talks
        Diarization(path, file, segments)
```

Since the graph's values were accurate in real time, if I could just manage to get the time intervals accurate in real time as well, I would be golden. Unfortunately, I do not know how to translate from iterating over a list of wav partials as slices to the length in time of a wave file.
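Not from the post, but possibly useful context: each entry of `wav_splits` is a Python slice over samples of the preprocessed wav, so dividing its `start`/`stop` by `sampling_rate` converts it to seconds. A minimal sketch (the input file name is hypothetical):

```python
# Sketch: converting Resemblyzer's wav partial slices into time spans.
from resemblyzer import VoiceEncoder, preprocess_wav, sampling_rate

wav = preprocess_wav("peter.mp3")  # hypothetical input file
encoder = VoiceEncoder("cpu")
_, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)

for s in wav_splits[:5]:  # the slices index samples, not seconds
    print(f"partial spans {s.start / sampling_rate:.2f}s - {s.stop / sampling_rate:.2f}s")
```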
-
[D] State of the art for Speaker Diarization?
I've tried Resemblyzer's method, yet it always either cut out too much of his voice or included too much of others'. It also required that I have a clip of him talking, and the quality of that clip heavily impacted its performance.
-
Is there a python based speaker diarization system you would recommend?
Try this: https://github.com/resemble-ai/Resemblyzer
What are some alternatives?
speechbrain - A PyTorch-based Speech Toolkit
pyannote-audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
k2 - FSA/FST algorithms, differentiable, with PyTorch compatibility.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
kaldi-gstreamer-server - Real-time full-duplex speech recognition server, based on the Kaldi toolkit and the GStreamer framework.
Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
flowtron - Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
StarGANv2-VC - StarGANv2-VC: A Diverse, Unsupervised, Non-parallel Framework for Natural-Sounding Voice Conversion