notes_public vs floss-various-contribs
notes_public | floss-various-contribs
---|---
3 | 2
notes_public
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
Yeah, it would be nice if the financial backing behind Rhasspy/Piper led to improvements in espeak-ng too, but based on my own development-related experience with the espeak-ng code base (related elsewhere in the thread), I suspect it would be significantly easier to extract the specific required text-to-phonemes functionality, or (to a certain degree) reimplement it (or use a different project as a base[3]), than to more closely/fully integrate changes with espeak-ng itself[4]. :/
It seems Piper currently abstracts its phonemize-related functionality behind a library[0] that makes use of an espeak-ng fork[1].
Unfortunately, it also seems that license-related issues may have an impact[2] on whether Piper continues to make use of espeak-ng.
For your specific example of handling 1984 as a year, my understanding is that espeak-ng can handle situations like that via parameters/configuration, but in my experience there can be unexpected interactions between different configuration/API options[6].
[0] https://github.com/rhasspy/piper-phonemize
[1] https://github.com/rhasspy/espeak-ng
[2] https://github.com/rhasspy/piper-phonemize/issues/30#issueco...
[3] Previously I've made note of some potential options here: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
[4] For example, as I note here[5], there are currently at least four different ways to access espeak-ng's phoneme-related functionality--and it seems that they all differ in their output, sometimes consistently and other times dependent on configuration (e.g. audio output mode, spoken punctuation) and probably also input. :/
[5] https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
[6] For example, see my test cases for some other numeric-related configuration options here: https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
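If you want to poke at the output-mode differences yourself, here's a rough sketch comparing espeak-ng's two CLI phoneme modes on the "1984" example. It assumes `espeak-ng` may or may not be on your PATH, and uses only documented flags: `-q` (quiet, no audio), `-x` (phoneme mnemonics to stdout), `--ipa` (IPA phonemes to stdout), `-v` (voice). The helper names are mine.

```python
import shutil
import subprocess

def phoneme_cmd(text, mode="ipa", voice="en"):
    """Build an espeak-ng invocation that prints phonemes instead of
    producing audio: --ipa for IPA output, -x for ASCII phoneme
    mnemonics; -q suppresses audio in both cases."""
    flag = "--ipa" if mode == "ipa" else "-x"
    return ["espeak-ng", "-q", "-v", voice, flag, text]

def phonemize(text, mode="ipa"):
    """Run espeak-ng (if installed) and return its phoneme output,
    or None when the binary isn't available."""
    if shutil.which("espeak-ng") is None:
        return None
    result = subprocess.run(phoneme_cmd(text, mode),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # Compare the two phoneme output modes on the year example; in my
    # experience the two modes aren't guaranteed to line up one-to-one.
    for mode in ("ipa", "ascii"):
        print(mode, phonemize("1984", mode))
```

This only exercises the CLI paths; the library API (e.g. via piper-phonemize) can differ again, which is part of the point above.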
-
The Case for Nushell
I also discovered an existing discussion[1] related to this topic, which includes a link[2] to a "helper to call nushell nuon/json/yaml commands from bash/fish/zsh" and a comment[3] that the current nushell dev focus is "on getting the experience inside nushell right and [we] probably won't be able to dedicate design time to get the interface of native Nu commands with an outside POSIX shell right and stable".
[0] https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
[1] "Expose some commands to external world #6554": https://github.com/nushell/nushell/issues/6554
[2] https://github.com/cruel-intentions/devshell-files/blob/mast...
[3] https://github.com/nushell/nushell/issues/6554#issuecomment-...
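As a minimal sketch of that wrapper idea, the only nushell feature assumed here is its one-shot mode, `nu -c '<pipeline>'`, which runs a pipeline and exits, so a POSIX shell (or anything else) can capture its stdout like any other command. The helper names are mine, not from the linked project.

```python
import shutil
import subprocess

def nu_cmd(pipeline):
    """Build a one-shot nushell invocation usable from bash/zsh/etc.
    `nu -c` runs the given pipeline and exits."""
    return ["nu", "-c", pipeline]

def run_nu(pipeline):
    """Run the pipeline if nushell is installed, returning its stdout,
    or None when `nu` isn't on the PATH."""
    if shutil.which("nu") is None:
        return None
    result = subprocess.run(nu_cmd(pipeline),
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # e.g. borrow a structured-data command from inside a plain shell script:
    print(run_nu("ls | length"))
```

This is exactly the kind of ad-hoc bridging the linked comment suggests won't get official design attention, so treat the interface as unstable.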
floss-various-contribs
What are some alternatives?
piper - A fast, local neural text to speech system