larynx-dialogue VS larynx

Compare larynx-dialogue vs larynx and see what their differences are.

larynx

End to end text to speech system using gruut and onnx (by rhasspy)
                  larynx-dialogue   larynx
Mentions          4                 18
Stars             -                 788
Growth            -                 -
Activity          -                 0.0
Latest commit     -                 12 months ago
Language          Python            Python
License           -                 MIT License
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

larynx-dialogue

Posts with mentions or reviews of larynx-dialogue. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-01.
  • ESpeak-ng: speech synthesizer with more than one hundred languages and accents
    21 projects | news.ycombinator.com | 1 May 2024
    Based on my own recent experience[0] with espeak-ng, IMO the project is currently in a really tough situation[3]:

    * the project seems to provide real value to a huge number of people who rely on it for reasons of accessibility (even more so for non-English languages); and,

    * the project is a valuable trove of knowledge about multiple languages--collected & refined over multiple decades by both linguistic specialists and everyday speakers/readers; but...

    * the project's code base is very much of "a different era" reflecting its mid-90s origins (on RISC OS, no less :) ) and a somewhat piecemeal development process over the following decades--due in part to a complex Venn diagram of skills, knowledge & familiarity required to make modifications to it.

    Perhaps the prime example of the last point is that `espeak-ng` has a hand-rolled XML parser--which attempts to handle both valid & invalid SSML markup--and markup parsing is interleaved with internal language-related parsing in the code. And this is implemented in C.

    [Aside: Due to this I would strongly caution against feeding "untrusted" input to espeak-ng in its current state but unfortunately that's what most people who rely on espeak-ng for accessibility purposes inevitably do while browsing the web.]

    [TL;DR: More detail/repros/observations on espeak-ng issues here:

    * https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...

    * https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...

    * https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

    ]

    Contributors to the project are not unaware of the issues with the code base (which are exacerbated by the difficulty of even tracing the execution flow in order to understand how the library operates) nor that it would benefit from a significant refactoring effort.

    However, as is typical with such projects which greatly benefit individual humans but don't offer an opportunity to generate significant corporate financial return, a lack of developers with sufficient skill/knowledge/time to devote to a significant refactoring means a "quick workaround" for a specific individual issue is often all that can be managed.

    This is often exacerbated by outdated/unclear/missing documentation.

    IMO there are two contribution approaches that could help the project moving forward while requiring the least amount of specialist knowledge/experience:

    * Improve visibility into the code by adding logging/tracing to make it easier to see why a particular code path gets taken.

    * Integrate an existing XML parser as a "pre-processor" to ensure that only valid/"sanitized"/cleaned-up XML is passed through to the SSML parsing code--this would increase robustness/safety and facilitate future removal of XML parsing-specific workarounds from the code base (leading to less tangled control flow) and potentially future removal/replacement of the entire bespoke XML parser.
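    The pre-processor idea above could be sketched roughly as follows (hypothetical, not part of espeak-ng or any existing contribution): parse incoming markup with a strict XML parser and only hand well-formed SSML to the downstream code, falling back to escaped plain text for anything invalid.

    ```python
    # Hypothetical SSML pre-processing sketch: use a real XML parser to
    # validate/re-serialize input, so only well-formed markup ever reaches
    # the fragile hand-rolled SSML handling code.
    import xml.etree.ElementTree as ET

    def sanitize_ssml(markup: str) -> str:
        """Return re-serialized SSML if well-formed, else a plain <speak> wrapper.

        Falling back to escaped plain text means invalid markup degrades to
        "read it literally" instead of exercising untrusted parsing paths.
        """
        try:
            root = ET.fromstring(markup)
        except ET.ParseError:
            # Invalid XML: treat the entire input as literal text.
            escaped = (markup.replace("&", "&amp;")
                             .replace("<", "&lt;")
                             .replace(">", "&gt;"))
            return f"<speak>{escaped}</speak>"
        return ET.tostring(root, encoding="unicode")
    ```

    A real integration would also need to whitelist SSML element/attribute names, but even this minimal gate would keep malformed input away from the C parsing code.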

    Of course, the project is not short on ideas/suggestions for how to improve the situation but, rather, direct developer contributions so... shrug

    In light of this, last year when I was developing the personal project[0] which made use of a dependency that in turn used espeak-ng I wanted to try to contribute something more tangible than just "ideas" so began to write-up & create reproductions for some of the issues I encountered while using espeak-ng and at least document the current behaviour/issues I encountered.

    Unfortunately while doing so I kept encountering new issues which would lead to the start of yet another round of debugging to try to understand what was happening in the new case.

    Perhaps inevitably this effort eventually stalled--due to a combination of available time, a need to attempt to prioritize income generation opportunities and the downsides of living with ADHD--before I was able to share the fruits of my research. (Unfortunately I seem to be way better at discovering & root-causing bugs than I am at writing up the results...)

    However I just now used the espeak-ng project being mentioned on HN as a catalyst to at least upload some of my notes/repros to a public repo (see links in TLDR section above) in that hopes that maybe they will be useful to someone who might have the time/inclination to make a more direct code contribution to the project. (Or, you know, prompt someone to offer to fund my further efforts in this area... :) )

    [0] A personal project to "port" my "Dialogue Tool for Larynx Text To Speech" project[1] to use the more recent Piper TTS[2] system which makes use of espeak-ng for transforming text to phonemes.

    [1] https://rancidbacon.itch.io/dialogue-tool-for-larynx-text-to... & https://gitlab.com/RancidBacon/larynx-dialogue/-/tree/featur...

    [2] https://github.com/rhasspy/piper

    [3] Very much no shade toward the project intended.

  • Home Assistant’s Year of the Voice – Chapter 2
    7 projects | news.ycombinator.com | 27 Apr 2023
    My interest in offline TTS is actually entirely unrelated to the automation space: I'm interested in Text to Speech for creative pursuits, such as video game voice dialogue and animated videos.

    This is one of the reasons why the range & quantity of available voices is particularly important to me.

    After all, you can't really have a scene set in a board room with nine characters[3] if you've only got three voices to go around. :)

    I've actually been spending time this week on updating my "Dialogue Tool"[1] application (originally created to work with Larynx to help with narrative dialogue workflows such as voice "auditioning", intelligent caching & multiple voice recordings) to work with Piper.

    Which is where I ran into the question of how to navigate/curate a collection of 900+ voices.

    The main approaches I'm using so far are:

    (1) Random luck--just audition a bunch of different voices with your sample dialogue & see what you like.

    (2) Curation/sorting based on quality-related meta-data from the original dataset.

    (3) Generating a different dialogue line for each voice that includes its speaker number for identification purposes and (hopefully) isn't tedious to listen to across 900+ voices. :)
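    Approach (2) amounts to a simple sort over per-voice metadata. A minimal sketch (the dicts and the "wer" field are hypothetical stand-ins, not Piper's actual metadata schema):

    ```python
    # Sketch of approach (2): rank a multi-speaker model's voices by a
    # quality-related metric from the original dataset. Speaker IDs and
    # the "wer" field are illustrative placeholders, not real metadata.
    voices = [
        {"speaker_id": 12, "wer": 0.031},
        {"speaker_id": 407, "wer": 0.012},
        {"speaker_id": 88, "wer": 0.095},
    ]

    def best_voices(voices, n=2):
        """Return the n speakers with the lowest Word Error Rate.

        A low WER in the source dataset doesn't guarantee the resulting
        voice model is good, but it's a starting point for auditioning.
        """
        return sorted(voices, key=lambda v: v["wer"])[:n]

    top = [v["speaker_id"] for v in best_voices(voices)]
    print(top)
    ```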

    I haven't quite finished/uploaded results from (3) yet but example output based on approaches (3) & (2) can be heard here: https://rancidbacon.gitlab.io/piper-tts-demos/

    The recording has two sets of 10 voices which had the lowest Word Error Rate scores in the original dataset--which doesn't mean the resulting voice model is necessarily good but is at least a starting point for exploring.

    I'd also like to explore more analysis-based approaches for grouping/curation (e.g. vocal characteristics such as "softer", "lower", "older") but as I'm not getting paid for this[2], that's likely a longer term thing.

    A different approach which I've previously found really interesting is to use voices as a prompt for writing narrative dialogue. It really helps to hear the dialogue as you write it and the nuances of different voices can help spur ideas for where a conversation goes next...

    [1] See: https://rancidbacon.itch.io/dialogue-tool-for-larynx-text-to... & https://gitlab.com/RancidBacon/larynx-dialogue/-/tree/featur...

    [2] Am currently available/open to be though. :D

    [3] Will try to upload some example audio of this scene because I found it pretty funny. :)

larynx

Posts with mentions or reviews of larynx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-27.
  • Home Assistant’s Year of the Voice – Chapter 2
    7 projects | news.ycombinator.com | 27 Apr 2023
    The most exciting thing about Home Assistant's "Year of the Voice", for me, is that it is apparently enabling/supporting @synesthesiam's continued phenomenal contributions to the FLOSS off-line voice synthesis space.

    The quality, variety & diversity of voices that synesthesiam's "Larynx" TTS project (https://github.com/rhasspy/larynx/) made available, completely transformed the Free/Open Source Text To Speech landscape.

    In addition "OpenTTS" (https://github.com/synesthesiam/opentts) provided a common API for interacting with multiple FLOSS TTS projects which showed great promise for actually enabling "standing on the shoulders of" rather than re-inventing the same basic functionality every time.

    The new "Piper" TTS project mentioned in the article is the apparent successor to Larynx and, along with the accompanying LibriTTS/LibriVox-based voice models, brings to FLOSS TTS something it's never had before:

    * Too many voices! :)

    Seriously, the current LibriTTS voice model version has 900+ voices (of varying quality levels), how do you even navigate that many?![0]

    And that's not even considering the even higher quality single speaker models based on other audio recording sources.

    Offline TTS, while immensely valuable for individuals, doesn't seem to be an attractive domain for most commercial entities due to the lack of lock-in/telemetry opportunities, so I was concerned that we might end up missing out on further valuable contributions from synesthesiam's specialised skills & experience due to financial realities & the human need for food. :)

    I'm glad we instead get to see what happens next.

    [0] See my follow-up comment about this.

  • Text to speech
    4 projects | /r/selfhosted | 21 Feb 2023
    Larynx!
  • Ask HN: Are there any good open source Text-to-Speech tools?
    15 projects | news.ycombinator.com | 1 Jan 2023
    I've had good results with https://github.com/rhasspy/larynx
  • Recommend a Text to Speech tool ?
    1 project | /r/RASPBERRY_PI_PROJECTS | 12 Nov 2022
    Larynx is a really good text-to-speech engine
  • Klipper on android
    1 project | /r/klippers | 18 Oct 2022
    I was able to install 3.7 following this guide. https://github.com/rhasspy/larynx/issues/9
  • I built an audio only Gemini client.
    2 projects | /r/geminiprotocol | 5 Jun 2022
  • NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality
    14 projects | news.ycombinator.com | 17 May 2022
    If you've not already encountered them I'd definitely encourage you to check out these Free/Open Source projects too:

    * Larynx: https://github.com/rhasspy/larynx/

    * OpenTTS: https://github.com/synesthesiam/opentts

    * Likely Mimic3 in the near future: https://mycroft.ai/blog/mimic-3-preview/

    Larynx in particular has a focus on "faster than real-time" while OpenTTS is an attempt to package & provide a common REST API for all Free/Open Source Text To Speech systems so the FLOSS ecosystem can build on previous work supported by short-lived business interests, rather than start from scratch every time.
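    As a rough illustration of the "common REST API" idea: a client builds the same kind of request regardless of which backend synthesizes the audio. The endpoint and query parameters below follow OpenTTS's documented `GET /api/tts` route, but treat them as an assumption and check the project's README for the server you're running.

    ```python
    # Sketch of a client for an OpenTTS-style common TTS REST API.
    # The /api/tts endpoint, the "backend:voice" naming, and port 5500
    # are assumptions based on the OpenTTS README -- verify locally.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def tts_url(base: str, voice: str, text: str) -> str:
        """Build a synthesis request URL for an OpenTTS-style server."""
        query = urlencode({"voice": voice, "text": text})
        return f"{base}/api/tts?{query}"

    url = tts_url("http://localhost:5500", "larynx:en-us", "Hello, world.")
    # Fetching the URL would return WAV audio (needs a running server):
    # audio = urlopen(url).read()
    ```

    The point of the shared API is that swapping `larynx:en-us` for another backend's voice identifier is the only change a client needs.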

    AIUI the developer of the first two projects now works for Mycroft AI & is involved in the development of Mimic3 which seems very promising given how much of an impact on quality his solo work has had in just the past couple of years or so.

  • Need a recommendation: Self hosted speech to text service
    1 project | /r/selfhosted | 21 Mar 2022
    I haven't used it on its own, but Larynx has worked well for me with Rhasspy
  • NATSpeech: High Quality Text-to-Speech Implementation with HuggingFace Demo
    4 projects | news.ycombinator.com | 16 Feb 2022
  • Question: Does anybody know of a working Text to Speech for python on pi?
    1 project | /r/raspberry_pi | 29 Jan 2022

What are some alternatives?

When comparing larynx-dialogue and larynx you can also consider the following projects:

tortoise-tts - A multi-voice TTS system trained with an emphasis on quality

TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

RHVoice - a free and open source speech synthesizer for Russian and other languages

NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)

TTS - :robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)

rhasspy - Offline private voice assistant for many human languages

tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference

NATSpeech - A Non-Autoregressive Text-to-Speech (NAR-TTS) framework, including official PyTorch implementation of PortaSpeech (NeurIPS 2021) and DiffSpeech (AAAI 2022)

nerd-dictation - Simple, hackable offline speech to text - using the VOSK-API.

Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time

recasepunc - Model for recasing and repunctuating ASR transcripts

hmm_tts_build - a direct repository for building and using a "simple" tts