| | bark | pyannote-audio |
|---|---|---|
| Mentions | 67 | 15 |
| Stars | 32,784 | 5,123 |
| Growth | 3.8% | 5.2% |
| Activity | 5.4 | 8.6 |
| Latest commit | 8 days ago | 2 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bark
-
Exploring Bark, the Open Source Text-to-Speech Model
!pip install git+https://github.com/suno-ai/bark.git
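Once installed, bark's Python API (roughly as shown in its README) looks like this:

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

# Download and cache the model checkpoints, then synthesize speech from text.
preload_models()
audio_array = generate_audio("Hello, my name is Suno and I like pizza.")
write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)
```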
-
AI-generated sad girl with piano performs the text of the MIT License
To my knowledge, the model being used for this is "chirp", which is 'based on' bark [1], an AI text-to-speech model.
The GitHub page for bark links to a page about chirp, which returns a 404 page for me [2]. That suggests the model behind suno.ai's song generator isn't too much different from the text-to-speech model.
My hunch is that it was something of a coincidence that the bark model was capable of producing music, and that capability was spun off into this product. Unfortunately, there still seem to be issues with bark when generating long (like book-length) spoken audio. Which is too bad: as someone who has worked jobs that require lots of driving, it would be awesome to be able to have any text read to me in a natural-sounding voice.
[1] https://github.com/suno-ai/bark
-
Generating music in the waveform domain (2020)
Stable Audio and MusicGen sound better than Jukebox.
But the best so far is Suno.ai (https://app.suno.ai), especially with their V3 model; the results are very impressive, and while the fidelity isn't studio quality, they're getting very close.
It's very likely based on Bark, the TTS model they released earlier, but trained on more data and at higher resolution.
https://github.com/suno-ai/bark
-
Stable-Audio-Demo
https://github.com/suno-ai/bark
> Bark was developed for research purposes. It is not a conventional text-to-speech model but instead a fully generative text-to-audio model, which can deviate in unexpected ways from provided prompts. Suno does not take responsibility for any output generated. Use at your own risk, and please act responsibly.
I've generated probably >200 songs now with Suno, of which perhaps 10 have been any good, and I can't detect any pattern in terms of the outputs.
Here's another one which is pretty good. I accidentally copied and pasted the prompt and lyrics, and it's amazing to me how 'musically' it renders the prompt:
-
Suno AI
hahah wow! cool :-)
PS: OT, I am reading this Bark thing (https://github.com/suno-ai/bark). Can I run it locally on a MacBook 2015 with 8GB RAM?
-
SDXL + SVD + Suno AI
I have it locally. The model is on Hugging Face. It runs with about 8GB of VRAM.
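For the 8GB question above: bark's README documents environment variables for using smaller checkpoints and offloading to CPU on low-memory machines. A minimal sketch (they must be set before importing bark):

```python
import os

# Documented bark options for limited VRAM/RAM: smaller checkpoints + CPU offload.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"
os.environ["SUNO_OFFLOAD_CPU"] = "True"

from bark import generate_audio, preload_models

preload_models()
audio_array = generate_audio("Testing bark on a low-memory machine.")
```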
- [discussion] text to voice generation for textbooks
-
Open Source Libraries
suno-ai/bark
- Weird A.I. Yankovic, a cursed deep dive into the world of voice cloning
- FLaNK Stack Weekly 2 October 2023
pyannote-audio
-
Open Source Libraries
pyannote/pyannote-audio
-
AI Transcribing tool for video with two voices?
Open source. I've found this to be pretty nice; it's just a wrapper around some Hugging Face models: https://github.com/pyannote/pyannote-audio
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
(not the creator, but I've built something similar for personal use)
This is a great library for determining which speaker is speaking at each point in an audio file (this is called speaker diarization); I imagine they used it or something like it. Works really well out of the box!
https://github.com/pyannote/pyannote-audio
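For reference, a minimal out-of-the-box sketch with pyannote.audio 3.x; `HF_TOKEN` is a placeholder for a Hugging Face access token with the gated model's terms accepted:

```python
from pyannote.audio import Pipeline

# Load the pretrained diarization pipeline (gated on Hugging Face).
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1",
                                    use_auth_token="HF_TOKEN")  # placeholder token
diarization = pipeline("podcast_episode.wav")

# Print "who spoke when" as (start, end, speaker) turns.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```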
-
I wanted to use OpenAI's Whisper speech-to-text on my Mac without installing stuff in the Terminal, so I made MacWhisper, a free Mac app for transcribing audio and video files and generating subtitles. Would love to hear some feedback on it!
Do you think pyannote could be implemented in the Pro version of the app to support diarization?
- I won several speaker diarization challenges with pyannote.audio
-
I made a free transcription service powered by Whisper AI
Free startup idea: Use Whisper with pyannote-audio[0]'s speaker diarization. Upload a recording, get back a multi-speaker annotated transcription.
Make a JSON API and I'll be your first customer.
[0] https://github.com/pyannote/pyannote-audio
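A rough sketch of that idea, not a reference implementation: Whisper supplies timestamped text segments, pyannote supplies speaker turns, and a simple best-overlap heuristic (my assumption, not part of either library) joins them. `HF_TOKEN` is a placeholder access token:

```python
import whisper
from pyannote.audio import Pipeline

# Transcribe with Whisper; each segment carries start/end timestamps.
asr = whisper.load_model("base")
transcript = asr.transcribe("recording.wav")

# Diarize with pyannote: who spoke when.
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1",
                                    use_auth_token="HF_TOKEN")  # placeholder token
turns = list(pipeline("recording.wav").itertracks(yield_label=True))

def speaker_for(start, end):
    # Pick the speaker whose diarization turn overlaps this ASR segment the most.
    best, best_overlap = "UNKNOWN", 0.0
    for turn, _, label in turns:
        overlap = min(end, turn.end) - max(start, turn.start)
        if overlap > best_overlap:
            best, best_overlap = label, overlap
    return best

for seg in transcript["segments"]:
    print(f'{speaker_for(seg["start"], seg["end"])}: {seg["text"].strip()}')
```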
-
Can Whisper differentiate between different voices?
Whisper can't, but pyannote-audio can. I've seen a couple of prototypes out there which link the two together.
-
[D] Is there a way to distinguish different human voices from one audio file?
You can use the pyannote Python library. It will identify the different speakers in the audio and can create small audio files for each speaker.
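A minimal sketch of that workflow; using pydub for the slicing is my choice, not something the comment specifies, and `HF_TOKEN` is a placeholder token:

```python
from pyannote.audio import Pipeline
from pydub import AudioSegment  # assumption: pydub handles slicing and export

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1",
                                    use_auth_token="HF_TOKEN")  # placeholder token
diarization = pipeline("input.wav")

audio = AudioSegment.from_wav("input.wav")
for i, (turn, _, speaker) in enumerate(diarization.itertracks(yield_label=True)):
    clip = audio[int(turn.start * 1000):int(turn.end * 1000)]  # pydub slices in ms
    clip.export(f"{speaker}_{i:03d}.wav", format="wav")
```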
- Post-Game Analysis: Destiny & Alex VS Andrew & Zen Shapiro
-
A quick and dirty tool for automatically analyzing speaking time in online debates (Effortpost)
This Colab notebook is basically a standard template (with small changes) provided by pyannote-audio, the library implementing the speaker diarization functionality we need.
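The exact notebook isn't reproduced here, but the speaking-time tally it performs can be sketched from pyannote's diarization output roughly like this (`HF_TOKEN` again a placeholder token):

```python
from collections import defaultdict
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1",
                                    use_auth_token="HF_TOKEN")  # placeholder token
diarization = pipeline("debate.wav")

# Total seconds of speech per diarized speaker label.
totals = defaultdict(float)
for turn, _, speaker in diarization.itertracks(yield_label=True):
    totals[speaker] += turn.duration

for speaker, seconds in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{speaker}: {seconds:.1f} s")
```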
What are some alternatives?
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
speechbrain - A PyTorch-based Speech Toolkit
Retrieval-based-Voice-Conversion-WebUI - Easily train a good VC model with voice data <= 10 mins!
Resemblyzer - A python package to analyze and compare voices with deep learning
whisper.cpp - Port of OpenAI's Whisper model in C/C++
Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
inaSpeechSegmenter - CNN-based audio segmentation toolkit. Allows to detect speech, music, noise and speaker gender. Has been designed for large scale gender equality studies based on speech time per gender.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
uis-rnn - This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper Fully Supervised Speaker Diarization.