pyannote-audio VS Resemblyzer

Compare pyannote-audio vs Resemblyzer and see what their differences are.

Resemblyzer

A Python package to analyze and compare voices with deep learning (by resemble-ai)
                 pyannote-audio      Resemblyzer
Mentions         15                  4
Stars            5,077               2,596
Growth           4.3%                1.0%
Activity         8.6                 3.4
Last commit      3 days ago          7 months ago
Language         Jupyter Notebook    Python
License          MIT License         Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

pyannote-audio

Posts with mentions or reviews of pyannote-audio. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-02.

Resemblyzer

Posts with mentions or reviews of Resemblyzer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-06.
  • Build an Audio-Driven Speaker Recognition System Using Open-Source Technologies — Resemblyzer and QdrantDB.
    1 project | dev.to | 18 Jan 2024
    Resemblyzer lets us derive a high-level representation of a voice through a deep learning model. It simplifies developers' lives by letting them convert audio clips into vectors with just a few lines of code, without having to build or train neural networks themselves. See the official GitHub repository. (A minimal embedding sketch follows this list.)
  • Get timestamps for .wav partials;
    1 project | /r/learnpython | 10 Sep 2021
    I want to modify Resemblyzer's speaker diarization script to cut out the parts of audio where a specific speaker isn't present. While the graph generated by the original demo seems alright, the timestamps at which it chooses to cut the audio are off. I concluded this because when I printed out all the timestamp information, my 22-minute-long video came out to be 1036 seconds long. Also, the variable I'm indexing the time by seems to be a collection of "wave partials as a list of slices", as described by the function that generates its value, and that same function warns that the intervals are unreliable. This is bad because, as you will see below in my code, when cutting the video with ffmpeg I treat them as if they were one-to-one with the video:

        from os import listdir, system
        from os.path import join
        from pathlib import Path

        import numpy as np
        from resemblyzer import preprocess_wav, VoiceEncoder
        from resemblyzer.hparams import sampling_rate


        def Diarization(path, file, segments):
            wav_fpath = Path(join(path, file))
            wav = preprocess_wav(wav_fpath)
            speaker_names = ["Peter"]
            # Reference clip(s) for the target speaker, given as [start, end] in seconds
            speaker_wavs = [wav[int(s[0] * sampling_rate):int(s[1] * sampling_rate)]
                            for s in segments]
            encoder = VoiceEncoder("cpu")
            print("Running the continuous embedding on cpu, this might take a while...")
            _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
            speaker_embeds = [encoder.embed_utterance(speaker_wav) for speaker_wav in speaker_wavs]
            similarity_dict = {name: cont_embeds @ speaker_embed
                               for name, speaker_embed in zip(speaker_names, speaker_embeds)}
            # Midpoint of each partial's slice, converted from samples to seconds
            times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
            keep = True
            cutTimes = [[times[0], times[len(wav_splits) - 1]]]
            for i in range(len(wav_splits)):
                similarities = [s[i] for s in similarity_dict.values()]
                best = np.argmax(similarities)
                name, similarity = list(similarity_dict.keys())[best], similarities[best]
                if similarity > 0.65:
                    if not keep:
                        cutTimes.append([times[i], times[len(wav_splits) - 1]])
                        keep = True
                else:
                    if keep:
                        cutTimes[len(cutTimes) - 1][1] = times[i]
                        keep = False
            # Build an ffmpeg aselect expression keeping only the target speaker's segments
            cutCommand = ""
            for num, seg in enumerate(cutTimes):
                if num == 0:
                    cutCommand += f"between(t,{seg[0]},{seg[1]})"
                    continue
                cutCommand += f"+between(t,{seg[0]},{seg[1]})"
            addMe = "Cut - "
            print(f"ffmpeg -i \"{join(path, file)}\" -af \"aselect='{cutCommand}',asetpts=N/SR/TB\" \"{join(path, addMe + file)}\"")
            system(f"ffmpeg -y -i \"{join(path, file)}\" -af \"aselect='{cutCommand}',asetpts=N/SR/TB\" \"{join(path, addMe + file)}\"")


        path = r'C:\Users\mlfre\OneDrive\Desktop\Resemblyzer\Resemblyzer-master\audio_data'
        for file in listdir(path):
            if file == "peter.mp3":
                segments = [[12, 21]]
                Diarization(path, file, segments)

    Since the graph's values were accurate in real time, if I could just manage to get the time intervals accurate in real time as well, I would be golden. Unfortunately, I do not know how to translate from iterating over a list of wav partials as slices to positions in time within the wave file. (One way to do this conversion is sketched after this list.)
  • [D] state of art for Speaker Diarization?
    3 projects | /r/MachineLearning | 6 Apr 2021
    I've tried Resemblyzer's method, yet it always either cut out too much of his voice or included too much of others'. It also required that I have a clip of him talking, and the quality of that clip heavily impacted its performance. (A reference-free pyannote.audio pipeline is sketched after this list.)
  • Is there a python based speaker diarization system you would recommend?
    2 projects | /r/LanguageTechnology | 14 Mar 2021
    Try this: https://github.com/resemble-ai/Resemblyzer
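
To make the "few lines of code" claim above concrete, here is a minimal sketch of computing and comparing voice embeddings with Resemblyzer; preprocess_wav, VoiceEncoder, and embed_utterance are the package's documented entry points, and the audio file names are placeholders:

    from pathlib import Path

    from resemblyzer import preprocess_wav, VoiceEncoder

    # preprocess_wav resamples to 16 kHz, normalizes volume, and trims long silences
    wav_a = preprocess_wav(Path("speaker_a.wav"))  # placeholder file names
    wav_b = preprocess_wav(Path("speaker_b.wav"))

    encoder = VoiceEncoder()  # downloads/loads the pretrained model
    embed_a = encoder.embed_utterance(wav_a)  # 256-dim, L2-normalized numpy vector
    embed_b = encoder.embed_utterance(wav_b)

    # Embeddings are L2-normalized, so the dot product is cosine similarity
    print("similarity:", float(embed_a @ embed_b))

Higher values suggest the same speaker; the 0.65 threshold used in the timestamps post above is one empirical cutoff, not a fixed property of the model.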
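On the timestamps question: the wav_splits returned by embed_utterance(..., return_partials=True) are Python slice objects indexed in samples of the preprocessed 16 kHz wav, so their bounds divide by sampling_rate to give seconds. Note, though, that preprocess_wav trims long silences, which may be why 22 minutes of video yielded only 1036 seconds of processed audio: timestamps measured in the trimmed wav will not line up with the original file. A minimal sketch (the file name is a placeholder):

    from pathlib import Path

    from resemblyzer import preprocess_wav, VoiceEncoder
    from resemblyzer.hparams import sampling_rate

    wav = preprocess_wav(Path("peter.mp3"))  # silence trimming happens here
    encoder = VoiceEncoder("cpu")
    _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)

    # Each partial is a slice over the *trimmed* wav; its bounds convert to
    # seconds of the trimmed audio, not of the original recording
    for s in wav_splits[:5]:
        print(f"{s.start / sampling_rate:7.2f}s - {s.stop / sampling_rate:7.2f}s")
    print(f"trimmed duration: {len(wav) / sampling_rate:.1f}s")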
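For diarization without a reference clip of the target speaker, as discussed in the state-of-the-art thread above, a pretrained pyannote.audio pipeline is the commonly suggested route. A minimal sketch, assuming pyannote.audio 3.x, a Hugging Face access token, and the gated pyannote/speaker-diarization-3.1 checkpoint (names may change between releases):

    from pyannote.audio import Pipeline

    # The checkpoint is gated: accept its user conditions on Hugging Face
    # and supply a personal access token
    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1",
        use_auth_token="YOUR_HF_TOKEN",  # placeholder
    )

    # Who spoke when, with no enrollment/reference clip required
    diarization = pipeline("audio.wav")  # placeholder file name
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f"{turn.start:6.1f}s - {turn.end:6.1f}s  {speaker}")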

What are some alternatives?

When comparing pyannote-audio and Resemblyzer you can also consider the following projects:

NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)

speechbrain - A PyTorch-based Speech Toolkit

espnet - End-to-End Speech Processing Toolkit

Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.

inaSpeechSegmenter - CNN-based audio segmentation toolkit. Detects speech, music, noise, and speaker gender. Designed for large-scale gender-equality studies based on speech time per gender.

uis-rnn - This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper Fully Supervised Speaker Diarization.

segmentation_models.pytorch - Segmentation models with pretrained backbones. PyTorch.

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

SincNet - SincNet is a neural architecture for efficiently processing raw audio samples.

Wave-U-Net-for-Speech-Enhancement - A PyTorch implementation of Wave-U-Net, adapted to speech enhancement.

Retrieval-based-Voice-Conversion-WebUI - Easily train a good VC model with voice data <= 10 mins!