espnet VS Resemblyzer

Compare espnet vs Resemblyzer and see how they differ.

                 espnet               Resemblyzer
Mentions         15                   4
Stars            7,872                2,592
Stars growth     2.5%                 2.0%
Activity         10.0                 3.4
Latest commit    about 13 hours ago   7 months ago
Language         Python               Python
License          Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

espnet

Posts with mentions or reviews of espnet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-17.

Resemblyzer

Posts with mentions or reviews of Resemblyzer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-18.
  • Build an Audio-Driven Speaker Recognition System Using Open-Source Technologies — Resemblyzer and QdrantDB.
    1 project | dev.to | 18 Jan 2024
    Resemblyzer lets us derive a high-level representation of a voice through a deep learning model. It simplifies developers' lives by letting them convert audio clips into vectors with just a few lines of code, without having to work with neural networks directly. See the official GitHub repository; a minimal usage sketch also follows this list of posts.
  • Get timestamps for .wav partials;
    1 project | /r/learnpython | 10 Sep 2021
    I want to modify Resemblyzer's speaker diarization script so that it cuts out the parts of the audio where a specific speaker isn't present. The graph generated by the original demo looks right, but the timestamps at which the script chooses to cut the audio are off: when I printed all the timestamp information, my 22-minute video added up to only 1036 seconds. The variable I'm indexing the times by appears to be a collection of "wav partials as a list of slices", judging by the function that generates its value, and the function I modified to get the times warns that the returned intervals are unreliable. That's bad because, as you can see in my code below, when cutting the video with ffmpeg I treat the times as if they mapped one-to-one onto the video's timeline:

```python
from resemblyzer import preprocess_wav, VoiceEncoder
from demo_utils import *  # provides np and sampling_rate (see Resemblyzer's demo_utils.py)
from pathlib import Path
from os import listdir, system
from os.path import join

def Diarization(path, file, segments):
    wav_fpath = Path(join(path, file))
    wav = preprocess_wav(wav_fpath)

    speaker_names = ["Peter"]
    # Note: int() must wrap the whole product, otherwise the stop index
    # is wrong for fractional segment times.
    speaker_wavs = [wav[int(s[0] * sampling_rate):int(s[1] * sampling_rate)]
                    for s in segments]

    encoder = VoiceEncoder("cpu")
    print("Running the continuous embedding on cpu, this might take a while...")
    _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
    speaker_embeds = [encoder.embed_utterance(speaker_wav) for speaker_wav in speaker_wavs]
    similarity_dict = {name: cont_embeds @ speaker_embed
                       for name, speaker_embed in zip(speaker_names, speaker_embeds)}

    # Midpoint of each partial's slice, converted from samples to seconds.
    times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
    keep = True
    cutTimes = [[times[0], times[-1]]]
    for i in range(len(wav_splits)):
        similarities = [s[i] for s in similarity_dict.values()]
        best = np.argmax(similarities)
        name, similarity = list(similarity_dict.keys())[best], similarities[best]
        if similarity > 0.65:
            if not keep:
                cutTimes.append([times[i], times[-1]])
                keep = True
        else:
            if keep:
                cutTimes[-1][1] = times[i]
                keep = False

    # Build an ffmpeg aselect expression that keeps only the chosen segments.
    cutCommand = "+".join(f"between(t,{seg[0]},{seg[1]})" for seg in cutTimes)
    addMe = "Cut - "
    cmd = (f"ffmpeg -y -i \"{join(path, file)}\" "
           f"-af \"aselect='{cutCommand}',asetpts=N/SR/TB\" "
           f"\"{join(path, addMe + file)}\"")
    print(cmd)
    system(cmd)

path = r'C:\Users\mlfre\OneDrive\Desktop\Resemblyzer\Resemblyzer-master\audio_data'
for file in listdir(path):
    if file == "peter.mp3":
        segments = [[12, 21]]
        Diarization(path, file, segments)
```

    Since the graph's values were accurate in real time, if I could just get the time intervals accurate in real time as well, I would be golden. Unfortunately, I do not know how to translate from iterating over a list of wav partials as slices to positions in time within the wave file. (See the note after this list of posts for the slice-to-seconds conversion.)
  • [D] state of art for Speaker Diarization?
    3 projects | /r/MachineLearning | 6 Apr 2021
    I've tried Resemblyzer's method, yet it always either cut out too much of his voice or included too much of other people's. It also required that I have a clip of him talking, and the quality of that clip heavily impacted its performance.
  • Is there a python based speaker diarization system you would recommend?
    2 projects | /r/LanguageTechnology | 14 Mar 2021
    Try this: https://github.com/resemble-ai/Resemblyzer
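As a brief aside on the first Resemblyzer post above: the "few lines of code" workflow looks roughly like the sketch below, using the library's public preprocess_wav and VoiceEncoder API (the audio path is a placeholder):

```python
from pathlib import Path
from resemblyzer import VoiceEncoder, preprocess_wav

# Load one clip: resamples to 16 kHz, normalizes volume, trims long silences.
wav = preprocess_wav(Path("some_clip.wav"))  # placeholder path

# Embed the utterance as a single 256-dimensional, L2-normalized vector.
encoder = VoiceEncoder()
embed = encoder.embed_utterance(wav)
print(embed.shape)  # (256,)

# Because embeddings are unit-length, voice similarity between two clips
# is just a dot product: similarity = embed_a @ embed_b
```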
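And on the timestamps question: one plausible explanation, offered as an assumption rather than a confirmed diagnosis, is that preprocess_wav trims long silences, so the sample indices in wav_splits refer to the trimmed waveform rather than the original file's timeline (which would be consistent with a 22-minute file yielding only ~1036 seconds of kept audio). The slice-to-seconds conversion itself is simple:

```python
from resemblyzer import sampling_rate  # 16000; Resemblyzer's demo_utils imports it the same way

def slice_to_seconds(s):
    """Map one wav_splits slice (sample indices into the *preprocessed* wav)
    to (start, end) seconds on that same preprocessed timeline."""
    return s.start / sampling_rate, s.stop / sampling_rate

# Hypothetical partial covering samples 16000..32000 -> (1.0, 2.0).
# These seconds index the silence-trimmed wav, not the source video,
# so they cannot be passed to ffmpeg on the original file as-is.
print(slice_to_seconds(slice(16000, 32000)))
```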

What are some alternatives?

When comparing espnet and Resemblyzer you can also consider the following projects:

speechbrain - A PyTorch-based Speech Toolkit

pyannote-audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding

NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)

k2 - FSA/FST algorithms, differentiable, with PyTorch compatibility.

fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

kaldi-gstreamer-server - Real-time full-duplex speech recognition server, based on the Kaldi toolkit and the GStreamer framework.

Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.

DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.

tortoise-tts - A multi-voice TTS system trained with an emphasis on quality

flowtron - Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer

StarGANv2-VC - StarGANv2-VC: A Diverse, Unsupervised, Non-parallel Framework for Natural-Sounding Voice Conversion