piper vs whisper

| | piper | whisper |
|---|---|---|
| Mentions | 39 | 344 |
| Stars | 4,075 | 60,617 |
| Growth | 14.0% | 2.6% |
| Activity | 8.6 | 6.4 |
| Latest commit | 4 days ago | 3 days ago |
| Language | C++ | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
piper
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
After some brief research, it seems the issue you're seeing may be a known bug in at least some versions/releases of espeak-ng.
Here are some potentially related links if you'd like to dig deeper:
* "questions about mandarin data packet #1044": https://github.com/espeak-ng/espeak-ng/issues/1044
* "ESpeak NJ-1.51’s Mandarin pronunciation is corrupted #12952": https://github.com/nvaccess/nvda/issues/12952
* "The pronunciation of Mandarin Chinese using ESpeak NJ in NVDA is not normal #1028": https://github.com/espeak-ng/espeak-ng/issues/1028
* "When espeak-ng translates Chinese (cmn), IPA tone symbols are not output correctly #305": https://github.com/rhasspy/piper/issues/305
* "Please default ESpeak NG's voice role to 'Chinese (Mandarin, latin as Pinyin)' for Chinese to fix #12952 #13572": https://github.com/nvaccess/nvda/issues/13572
* "Cmn voice not correctly translated #1370": https://github.com/espeak-ng/espeak-ng/issues/1370
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
If you're not already aware, the primary developer of Mimic 3 (and its non-Mimic predecessor Larynx) continued TTS-related development with Larynx's successor, the project since renamed Piper: https://github.com/rhasspy/piper
Last year, Piper development was supported by Nabu Casa for their "Year of Voice" project for Home Assistant, and it sounds like Mike Hansen is going to continue on it with their support this year.
-
Coqui.ai Is Shutting Down
Coqui.ai was a commercial continuation of Mozilla TTS and STT (https://github.com/mozilla/TTS).
At the time (2018-ish), it was really impressive for on-device voice synthesis (with a quality approaching the Google and Azure cloud-based voice synthesis options) and open source, so a lot of people in the FOSS community were hoping it could be used for a privacy-respecting home assistant, Linux speech synthesis that doesn't suck, etc.
After Mozilla abandoned the project, Coqui continued development and had some really impressive one-shot voice cloning, but pivoted to marketing speech synthesis for game developers. They were probably having trouble monetizing it, and it doesn't surprise me that they shut down.
An equivalent project that's still in active development and doing really well is Piper TTS (https://github.com/rhasspy/piper).
-
OpenVoice: Versatile Instant Voice Cloning
There isn't an ElevenLabs app like that, but I think that's the most expedient method, by far.
(details and warning: an in-depth, opinionated take, written almost for my own benefit; I've done a lot of work near here recently but haven't had to organize my thoughts until now)
Why? Local inference is hard. You need two things: a clips-to-voice model (which we have here, but it's bleeding edge), and a text + voice -> speech model.
Text + voice -> speech, locally, has excellent prior art for me, in the form of a Raspberry Pi-focused ONNX inference library called [Piper](https://github.com/rhasspy/piper). I should just be able to copy that; about an afternoon of work!
Except...when these models are trained, they encode plaintext to model input using a library called eSpeak. eSpeak is basically f(plaintext) => ints representing phonemes. It's a C library, written in a style I haven't seen in a while, and it depends on other C libraries. So I'd end up needing to port something like 20K lines of C to Dart...or I could use WASM, but over the last year I lost the ability to reason through how to get WASM running in Dart, both native and web.
It's a really annoying technical problem: the speech models all use this eSpeak C library to turn plaintext => model input (tokenized phonemes).
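For a concrete sense of that step, here's a minimal sketch using the `phonemizer` Python package (a wrapper around an installed eSpeak NG); the phoneme-to-int mapping below is a toy stand-in, since Piper uses its own phoneme-to-id table:

```python
# Rough illustration of the plaintext => phonemes => ints step, using
# the phonemizer package (requires eSpeak NG installed on the system).
from phonemizer import phonemize

phonemes = phonemize("Hello world", language="en-us", backend="espeak")
print(phonemes)  # IPA string, e.g. "həloʊ wɜːld"

# Toy character-level encoding, standing in for the model's real
# phoneme-to-id table.
vocab = {ph: i for i, ph in enumerate(sorted(set(phonemes)))}
ids = [vocab[ph] for ph in phonemes]
print(ids)
```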
Re: ElevenLabs
I had looked into the API months ago and vaguely remembered it was _very_ complete.
I spent the last hour or two playing with it, and reconfirmed that. They have enough API surface that you could build an API that took voice recordings, created a voice, and then did POSTs / opened a socket connection to get audio data from that voice at will.
The only issue is pricing, IMHO: $0.18 per 1,000 characters. :/ But this is something I feel very comfortable saying wouldn't be _that_ much work to build and open source with a "bring your own API key" type thing. I had forgotten about ElevenLabs till your post, which made me realize there was an actually meaningful and quite moving use case for it.
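A minimal sketch of the "bring your own API key" idea, assuming ElevenLabs' text-to-speech REST endpoint; the voice ID and model ID below are placeholders you'd swap for values from your own account:

```python
# Hedged sketch: POST text to an ElevenLabs voice and save the audio.
import requests

API_KEY = "your-elevenlabs-api-key"      # bring your own key
VOICE_ID = "voice-id-from-your-account"  # placeholder

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={"text": "Hello from a cloned voice.", "model_id": "eleven_monolingual_v1"},
)
resp.raise_for_status()

# The endpoint returns raw audio bytes (MP3 by default).
with open("out.mp3", "wb") as f:
    f.write(resp.content)
```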
-
Hello guys, any selfhosted alternative to eleven labs?
piper (https://github.com/rhasspy/piper)
-
[D] What offline TTS Model is good enough for a realistic real-time task?
I have been using piper-tts and it is GREAT and super lightweight / easy to use. On a 2080 I'm sure you can use the HQ models no worries!
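If it helps, here's a minimal sketch of driving piper from Python via its CLI, assuming the `piper` executable is on your PATH and you've already downloaded a voice model (the model filename is a placeholder):

```python
# piper reads text from stdin and writes a WAV file.
import subprocess

text = "The quick brown fox jumped over the lazy dog."
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "out.wav"],
    input=text.encode("utf-8"),
    check=True,
)
```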
-
Easy implement TTS libary for cpp
So I found a library on GitHub with a README and good documentation, called piper (https://github.com/rhasspy/piper). Apparently this library targets the Raspberry Pi, and yes, there is a text function, but I'd need to modify it to make it simpler; my simple project doesn't need this kind of big, complex library. All I need, as I said before, is a function that can output sound from the computer using a C++ library.
-
Piper-whistle – Tool for piper TTS voice model management
piper-whistle is a tool to manage voices used with the piper (https://github.com/rhasspy/piper) speech synthesizer. The main motivation was to download and reference models in a structured way. You may browse the docs online at https://think-biq.gitlab.io/piper-whistle/
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
You may want to try Piper for this case (RPi 4): https://github.com/rhasspy/piper
- Piper: A fast, local neural text to speech system
whisper
- Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
-
Why I Care Deeply About Web Accessibility And You Should Too
Let’s not talk about local models as the hardware requirements are way beyond most of these people’s reach. I have a MacBook Air with an M2 chip and 8GB of RAM and can hardly run Whisper locally, so I use this HuggingFace space.
-
How I built NotesGPT – a full-stack AI voice note app
Last week, I launched notesGPT, a free and open source voice note app that has had 35,000 visitors, 7,000 users, and over 1,000 GitHub stars in its first week. It allows you to record a voice note, transcribes it using Whisper, and uses Mixtral via Together to extract action items and display them in an action items view. It's fully open source and comes equipped with authentication, storage, vector search, action items, and is fully responsive on mobile for ease of use.
-
Ask HN: Can AI break a speech audio into individual words?
I found a pretty good discussion on the topic here:
https://github.com/openai/whisper/discussions/1243
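For the word-level piece specifically, recent versions of openai-whisper expose word timestamps directly; a minimal sketch:

```python
# Word-level segmentation with openai-whisper (word_timestamps requires
# a reasonably recent release of the package).
import whisper

model = whisper.load_model("base")
result = model.transcribe("speech.mp3", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        print(f"{word['start']:6.2f}-{word['end']:6.2f}  {word['word']}")
```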
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
There is a plot of language performance on their repo: https://github.com/openai/whisper
I am not aware of a multi-lingual leaderboard for speech recognition models.
- Ask HN: AI that allows you to make phone calls in a language you don't speak?
-
Ask HN: Favorite Podcast Episodes of 2023?
I don't know how OP does it, but here's how I'd do it (a rough sketch in code follows the list):
* Generate a transcript by running Whisper against the podcast audio file: https://github.com/openai/whisper
* Upload transcript to ChatGPT and ask it to summarize.
* Automate all the above.
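A minimal sketch of the first two steps, assuming the openai-whisper package and the OpenAI chat API (the model names and prompt are placeholders):

```python
# Transcribe a podcast episode, then ask a chat model to summarize it.
import whisper
from openai import OpenAI

model = whisper.load_model("base")
transcript = model.transcribe("podcast_episode.mp3")["text"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Summarize this podcast transcript:\n\n" + transcript,
    }],
)
print(response.choices[0].message.content)
```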
-
Need advice
Ahh, that makes sense. I've been building something like that, but only from other languages into English, using Whisper.
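That directionality matches Whisper's built-in translate task, which only translates into English; a minimal sketch:

```python
# Whisper's translate task: speech in another language -> English text.
import whisper

model = whisper.load_model("small")
result = model.transcribe("spanish_audio.mp3", task="translate")
print(result["text"])  # English translation of the speech
```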
-
Subtitle is now open-source
Whisper already generates subtitles[0], supporting VTT and SRT, so this is just a thin wrapper around that.
[0]: https://github.com/openai/whisper/blob/e58f28804528831904c3b...
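To see what that thin wrapper amounts to, here's a sketch that writes SRT from Whisper's segment output (the CLI can also emit SRT/VTT directly, e.g. with --output_format srt):

```python
# Build an SRT file from Whisper's per-segment start/end timestamps.
import whisper

def srt_time(t: float) -> str:
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    ms = int((s - int(s)) * 1000)
    return f"{int(h):02}:{int(m):02}:{int(s):02},{ms:03}"

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")

with open("audio.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n"
                f"{seg['text'].strip()}\n\n")
```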
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
> although it does require you to wear headphones so the bot doesn't hear itself and get interrupted.
Maybe you can rely on some sort of speaker identification to sort this out?
https://github.com/openai/whisper/discussions/264
What are some alternatives?
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
espeak-ng - eSpeak NG is an open source speech synthesizer that supports more than a hundred languages and accents.
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
silero-models - Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
mimic3 - A fast local neural text to speech engine for Mycroft
whisper.cpp - Port of OpenAI's Whisper model in C/C++
willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.