espeak-ng VS piper

Compare espeak-ng vs piper and see how they differ.

espeak-ng

eSpeak NG is an open source speech synthesizer that supports more than a hundred languages and accents. (by espeak-ng)
             espeak-ng                             piper
Mentions     25                                    33
Stars        2,858                                 3,902
Growth       5.3%                                  17.6%
Activity     7.2                                   8.9
Last commit  6 days ago                            5 days ago
Language     C                                     C++
License      GNU General Public License v3.0 only  MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

espeak-ng

Posts with mentions or reviews of espeak-ng. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-01.
  • IAMA senior javascript dev, ask me anything
    7 projects | /r/learnjavascript | 1 Jul 2023
    I'm skeptical about a senior JavaScript developer claiming to be bored. Nonetheless, let's see. How would you go about modifying [this](https://github.com/espeak-ng/espeak-ng/blob/master/emscripten/espeakng_glue.idl) IDL file, this C++ glue code, and the relevant Makefile to compile eSpeak NG to JavaScript with Emscripten with SSML support enabled?
  • Is there a good text to speech program for linux?
    6 projects | /r/linux | 22 Jun 2023
    eSpeak NG runs on Linux, BSD, macOS, Android, and Windows, and has been compiled to WASM with Emscripten. See also espeak and meSpeak.js.
  • Vietnamese Phonology
    1 project | /r/VulgarLang | 22 Jun 2023
    I may have a solution, BUT I'm at an airport right now, so... Perhaps tonight I can give you some ideas. There is a program I used to make a few presets for myself. https://github.com/espeak-ng/espeak-ng/blob/master/docs/languages.md
  • [P] Balacoon: Fastest neural text-to-speech on CPU
    1 project | /r/LanguageTechnology | 15 Apr 2023
    For this one, I used espeak (https://github.com/espeak-ng/espeak-ng) as a text processor. It is almost 17-year-old software and is pretty lacking, unfortunately. On the other hand, it's super fast and supports tens of languages. Long story short, punctuation introduces a phrase break with a pause of fixed length, and capitalization is ignored.
  • Balacoon: python package for text-to-speech
    4 projects | /r/Python | 13 Apr 2023
    I didn't release the training parts to build voices. I am considering it, but there are so many packages already (coqui, espnet, piper, nemo, fairseq, to name a few) that I focused on usability for now. Support for new languages is a different question. Everyone wants to train fancy neural nets, but support for a new language is about writing rules and having language expertise. I did it for English (https://github.com/balacoon/en_us_normalization/tree/c1019cf878aa6baf25d6fff719cf418cca5a3107/production/classify). Doing it for all the other languages would probably take me a lifetime. Other speech synthesis solutions use the 17-year-old espeak for this purpose (https://github.com/espeak-ng/espeak-ng/blob/master/docs/languages.md). I introduced a fallback to it in Balacoon too. But generally, it is outdated technology and I believe we should do better.
  • Is there a good audio-to-IPA phone app that doesn’t assume a particular language?
    1 project | /r/linguistics | 10 Apr 2023
    espeak-ng works by first converting text to IPA and then pronouncing that. But I'm not sure I'm aware of a way to input arbitrary IPA, and also the quality is probably too low for you.
  • I Created A Web Speech API NPM Package Called SpeechKit
    7 projects | /r/javascript | 23 Feb 2023
    There are espeak-ng https://github.com/espeak-ng/espeak-ng and pocketsphinx https://github.com/cmusphinx/pocketsphinx which can be used locally without making external requests.
  • Which languages have readily available IPA equivalents to learn from?
    1 project | /r/linguistics | 5 Feb 2023
    There are automatic tools to convert a written form of a language to IPA; I'm personally aware of espeak-ng, which supports a lot of languages (a minimal text-to-IPA command sketch appears after this list).
  • Ask HN: Are there any good open source Text-to-Speech tools?
    15 projects | news.ycombinator.com | 1 Jan 2023
    I've had good luck with https://github.com/espeak-ng/espeak-ng (for very specific purposes, and I was willing to wrangle IPA)
  • Node.js Native Messaging host
    5 projects | /r/node | 9 Oct 2022
    The Web Speech API does not provide a means to capture the audio output of speechSynthesis.speak(new SpeechSynthesisUtterance(...)). Using Native Messaging we start a local server, send input text or SSML to the local server with fetch(), pass the input data to a local speech synthesis engine, in this case espeak-ng, get the response back as WAV in the browser, which we parse to a Float32Array and write to a MediaStreamTrackGenerator, which we then output to speakers and/or share with peers (https://github.com/guest271314/native-messaging-espeak-ng; https://github.com/espeak-ng/espeak-ng/tree/master/chromium_extension). A sketch of the local-server half appears after this list.
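
Several posts above use espeak-ng as a text-to-IPA front end. A minimal sketch of that usage, assuming espeak-ng is installed and on the PATH (-q, --ipa, and -v are documented CLI options):

```python
# Convert text to IPA by shelling out to the espeak-ng CLI.
import subprocess

def text_to_ipa(text: str, voice: str = "en-us") -> str:
    """Return espeak-ng's IPA transcription of `text` (-q suppresses audio)."""
    result = subprocess.run(
        ["espeak-ng", "-q", "--ipa", "-v", voice, text],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(text_to_ipa("hello world"))
```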
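
The Node.js Native Messaging post describes a pipeline whose server half just needs to turn text into WAV bytes. A minimal sketch, assuming espeak-ng is installed (its --stdout flag writes the WAV stream to standard output); the HTTP framing and port here are illustrative, not the linked repository's actual protocol:

```python
# Toy local server: POST plain text, get synthesized WAV bytes back.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

def synthesize_wav(text: str) -> bytes:
    # --stdout makes espeak-ng emit a WAV stream instead of playing audio.
    return subprocess.run(
        ["espeak-ng", "--stdout", text],
        capture_output=True, check=True,
    ).stdout

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        text = self.rfile.read(int(self.headers["Content-Length"])).decode()
        wav = synthesize_wav(text)
        self.send_response(200)
        self.send_header("Content-Type", "audio/wav")
        self.send_header("Content-Length", str(len(wav)))
        self.end_headers()
        self.wfile.write(wav)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8765), Handler).serve_forever()  # port is arbitrary
```

The browser side (fetch(), Float32Array, MediaStreamTrackGenerator) lives in the linked native-messaging-espeak-ng repository.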

piper

Posts with mentions or reviews of piper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-17.
  • WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
    9 projects | news.ycombinator.com | 17 Jan 2024
    If you're not already aware, the primary developer of Mimic 3 (and its non-Mimic predecessor Larynx) continued TTS-related development with Larynx, since renamed Piper: https://github.com/rhasspy/piper

    Last year Piper development was supported by Nabu Casa for their "Year of Voice" project for Home Assistant and it sounds like Mike Hansen is going to continue on it with their support this year.

  • Coqui.ai Is Shutting Down
    4 projects | news.ycombinator.com | 3 Jan 2024
    Coqui-ai was a commercial continuation of Mozilla TTS and STT (https://github.com/mozilla/TTS).

    At the time (2018-ish), it was really impressive for on-device voice synthesis (with a quality approaching the Google and Azure cloud-based voice synthesis options) and open source, so a lot of people in the FOSS community were hoping it could be used for a privacy-respecting home assistant, Linux speech synthesis that doesn't suck, etc.

    After Mozilla abandoned the project, Coqui continued development and had some really impressive one-shot voice cloning, but pivoted to marketing speech synthesis for game developers. They were probably having trouble monetizing it, and it doesn't surprise me that they shut down.

    An equivalent project that's still in active development and doing really well is Piper TTS (https://github.com/rhasspy/piper).

  • OpenVoice: Versatile Instant Voice Cloning
    10 projects | news.ycombinator.com | 1 Jan 2024
    There isn't an ElevenLabs app like that, but I think that's the most expedient method, by far.

    (details and warning: in-depth, opinionated take, written almost for my own benefit, I've done a lot of work near here recently but haven't had to organize my thoughts until now)

    Why? Local inference is hard. You need two things: a clips-to-voice model (which we have here, but it's bleeding edge), and a text + voice -> speech model.

    Text + voice to speech, locally, has excellent prior art for me, in the form of a Raspberry Pi-based ONNX inference library called [Piper](https://github.com/rhasspy/piper). I should just be able to copy that, about an afternoon of work!

    Except... when these models are trained, they encode plaintext to model input using a library called eSpeak. eSpeak is basically f(plaintext) => ints representing phonemes. It is a C library, written in a style I haven't seen in a while, that depends on other C libraries. So I end up needing to port like 20K lines of C to Dart... or I could use WASM, but over the last year I lost the ability to reason through how to get WASM running in Dart, both native and web.

    It's a really annoying technical problem: the speech models all use this eSpeak C library to turn plaintext => model input (tokenized phonemes). (An illustrative sketch of this two-stage pipeline appears after these posts.)

    Re: ElevenLabs

    I had looked into the API months ago and vaguely remembered it was _very_ complete.

    I spent the last hour or two playing with it, and reconfirmed that. They have enough API surface that you could build an API that took voice recordings, created a voice, and then did POSTs / socket connection to get audio data from that voice at will.

    Only issue is pricing IMHO, $0.18 for 1000 characters. :/ But this is something I feel very comfortable saying wouldn't be _that_ much work to build and open source with a "bring your own API key" type thing. I had forgotten about Eleven Labs till your post, which made me realize there was an actually meaningful and quite moving use case for it.

  • Hello guys, any selfhosted alternative to eleven labs?
    3 projects | /r/selfhosted | 11 Dec 2023
    piper (https://github.com/rhasspy/piper)
  • [D] What offline TTS Model is good enough for a realistic real-time task?
    2 projects | /r/MachineLearning | 10 Dec 2023
    I have been using piper-tts and it is GREAT and super lightweight / easy to use. On a 2080 I'm sure you can use the HQ models, no worries! (A minimal CLI usage sketch appears after these posts.)
  • Easy implement TTS libary for cpp
    1 project | /r/cpp_questions | 7 Dec 2023
    So I found a library on GitHub called piper (https://github.com/rhasspy/piper) that has a readme and good documentation. Apparently this library targets the Raspberry Pi, and yes, there is a text function, but I'd need to modify it again to make it simpler. My simple project doesn't need this kind of big, complex library; all I need, as I said before, is just a function that can output sound from the computer using a C++ library.
  • Piper-whistle – Tool for piper TTS voice model management
    4 projects | news.ycombinator.com | 5 Dec 2023
    piper-whistle is a tool to manage voices used with the piper (https://github.com/rhasspy/piper) speech synthesizer. Main motivation was to download and reference models in a structured way. You may browse the docs online at https://think-biq.gitlab.io/piper-whistle/
  • StyleTTS2 – open-source Eleven Labs quality Text To Speech
    10 projects | news.ycombinator.com | 19 Nov 2023
    You may want to try Piper for this case (RPi 4): https://github.com/rhasspy/piper
  • Piper: A fast, local neural text to speech system
    1 project | news.ycombinator.com | 4 Nov 2023
  • Open Source Libraries
    25 projects | /r/AudioAI | 2 Oct 2023
    rhasspy/piper
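
The OpenVoice discussion above describes Piper's front end as f(plaintext) => phoneme ids consumed by an ONNX voice model. An illustrative sketch of that two-stage shape, with clearly labeled stand-ins: Piper actually links eSpeak as a C library (piper-phonemize) rather than shelling out, and real models ship their own phoneme-to-id maps:

```python
# Stage 1: plaintext -> phonemes (the espeak-ng CLI here is a stand-in for
# the eSpeak C library Piper links against).
# Stage 2: phonemes -> integer ids the voice model consumes.
import subprocess

def phonemize(text: str, voice: str = "en-us") -> str:
    out = subprocess.run(
        ["espeak-ng", "-q", "--ipa", "-v", voice, text],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def to_ids(phonemes: str, id_map: dict) -> list:
    # A real Piper model's config carries the phoneme-to-id map; this
    # lookup only illustrates the shape of the model input.
    return [id_map[ch] for ch in phonemes if ch in id_map]
```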
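
For the piper-tts recommendation above, basic usage is a one-liner: the piper CLI reads text on stdin and writes a WAV file. A minimal sketch, assuming piper is installed and a voice model has been downloaded (the model filename below is an example):

```python
# Drive the piper CLI from Python: text on stdin, WAV file out.
import subprocess

subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "hello.wav"],
    input="Hello from piper!", text=True, check=True,
)
```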

What are some alternatives?

When comparing espeak-ng and piper you can also consider the following projects:

RHVoice - a free and open source speech synthesizer for Russian and other languages

tortoise-tts - A multi-voice TTS system trained with an emphasis on quality

TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

scrcpy - Display and control your Android device

silero-models - Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple

aeneas - aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka forced alignment)

mimic3 - A fast local neural text to speech engine for Mycroft

SAM - Software Automatic Mouth - Tiny Speech Synthesizer

willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative

RealTimeSingingSynthesizer - Live Coding Singing Synthesizer. Python sinsy-NG wrapper.

GoogleNetworkSpeechSynthesis - Google's Network Speech Synthesis: Bring your own Google API key and proxy