rhino vs larynx
| | rhino | larynx |
|---|---|---|
| Mentions | 5 | 18 |
| Stars | 593 | 788 |
| Growth | 1.9% | - |
| Activity | 8.8 | 0.0 |
| Latest commit | 11 days ago | 10 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rhino
-
Speech Recognition in Unity: Adding Voice Input
Download pre-trained models: the "Porcupine" wake word from the Porcupine Wake Word repository and the "Video Player" context from the Rhino Speech-to-Intent repository. You can also train custom models on Picovoice Console.
-
Speech Recognition with SwiftUI
In order to initialize the voice AI, we’ll need both Porcupine (.ppn) and Rhino (.rhn) model files. Picovoice has made several pre-trained Porcupine and pre-trained Rhino models available on the Picovoice GitHub repositories. For this Barista app, we’re going to use the trigger phrase Hey Barista and the Coffee Maker context.
-
Cross-Browser Voice Commands with React
Get an AccessKey for free from Picovoice Console; you will need it as part of the init function. Also, get the English parameter file for Rhino from GitHub and save it to the public directory. Rhino uses this file as the basis for understanding English contexts (other languages are also supported).
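For reference, the same initialization-and-process pattern looks roughly like this with Rhino's Python SDK. This is a minimal sketch, assuming the `pvrhino` package: the AccessKey, context file name, and `get_next_audio_frame()` audio source are placeholders, not working values.

```python
def describe_inference(is_understood, intent, slots):
    """Pure helper: format a Rhino inference result as a readable string."""
    if not is_understood:
        return "command not understood"
    details = ", ".join(f"{k}={v}" for k, v in sorted(slots.items()))
    return f"intent={intent}" + (f" ({details})" if details else "")

def main():
    # Not run here: requires `pip install pvrhino`, a Picovoice Console
    # AccessKey, and a real audio source.
    import pvrhino

    rhino = pvrhino.create(
        access_key="${ACCESS_KEY}",       # placeholder: from Picovoice Console
        context_path="coffee_maker.rhn",  # placeholder: pre-trained or custom context
    )
    try:
        while True:
            # Feed 16 kHz, 16-bit PCM frames of length rhino.frame_length.
            frame = get_next_audio_frame()  # hypothetical audio source
            if rhino.process(frame):
                inference = rhino.get_inference()
                print(describe_inference(
                    inference.is_understood, inference.intent, inference.slots))
    finally:
        rhino.delete()
```

The split between the pure `describe_inference` helper and the I/O loop keeps the inference-handling logic easy to test without audio hardware.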
-
Ask HN: Private Alternatives to Alexa?
The only viable option that I found that could reliably infer commands from speech is https://github.com/Picovoice/rhino
Unfortunately it is not open source (the GitHub repository just has binary blobs) and requires logging in to an account to generate and download model files, but the accuracy is great and you can use it to send commands to Home Assistant to turn lights on/off etc.
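That last step can be sketched as a small mapping from a Rhino inference to a Home Assistant service call. The intent and slot names here are hypothetical (they depend on your trained context); the `/api/services/<domain>/<service>` endpoint and bearer-token header are Home Assistant's documented REST API.

```python
def intent_to_service_call(intent, slots):
    """Map a Rhino inference to a Home Assistant service path and payload.
    Intent/slot names are hypothetical examples."""
    if intent in ("turnLightOn", "turnLightOff"):
        service = "turn_on" if intent == "turnLightOn" else "turn_off"
        entity = f"light.{slots.get('location', 'living_room')}"
        return f"/api/services/light/{service}", {"entity_id": entity}
    raise ValueError(f"unhandled intent: {intent}")

def post_service_call(base_url, token, path, payload):
    # Not executed here: requires a running Home Assistant instance
    # and a long-lived access token.
    import json
    import urllib.request
    req = urllib.request.Request(
        base_url + path,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```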
-
Any self hosted Alexa's or similar?
https://github.com/Picovoice/rhino/blob/master/LICENSE says it's Apache 2 licensed
larynx
-
Home Assistant’s Year of the Voice – Chapter 2
The most exciting thing about Home Assistant's "Year of the Voice", for me, is that it is apparently enabling/supporting @synesthesiam's continued phenomenal contributions to the FLOSS off-line voice synthesis space.
The quality, variety & diversity of voices that synesthesiam's "Larynx" TTS project (https://github.com/rhasspy/larynx/) made available, completely transformed the Free/Open Source Text To Speech landscape.
In addition "OpenTTS" (https://github.com/synesthesiam/opentts) provided a common API for interacting with multiple FLOSS TTS projects which showed great promise for actually enabling "standing on the shoulders of" rather than re-inventing the same basic functionality every time.
The new "Piper" TTS project mentioned in the article is the apparent successor to Larynx and, along with the accompanying LibriTTS/LibriVox-based voice models, brings to FLOSS TTS something it's never had before:
* Too many voices! :)
Seriously, the current LibriTTS voice model version has 900+ voices (of varying quality levels), how do you even navigate that many?![0]
And that's not even considering the even higher quality single speaker models based on other audio recording sources.
Offline TTS, while immensely valuable for individuals, doesn't seem to be an attractive domain for most commercial entities due to the lack of lock-in/telemetry opportunities, so I was concerned that we might end up missing out on further valuable contributions from synesthesiam's specialised skills & experience due to financial realities & the human need for food. :)
I'm glad we instead get to see what happens next.
[0] See my follow-up comment about this.
-
Text to speech
Larynx!
-
Ask HN: Are there any good open source Text-to-Speech tools?
I've had good results with https://github.com/rhasspy/larynx
-
Recommend a Text to Speech tool?
Larynx is a really good text-to-speech engine
-
Klipper on android
I was able to install 3.7 following this guide. https://github.com/rhasspy/larynx/issues/9
- I built an audio only Gemini client.
-
NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality
If you've not already encountered them I'd definitely encourage you to check out these Free/Open Source projects too:
* Larynx: https://github.com/rhasspy/larynx/
* OpenTTS: https://github.com/synesthesiam/opentts
* Likely Mimic3 in the near future: https://mycroft.ai/blog/mimic-3-preview/
Larynx in particular has a focus on "faster than real-time" while OpenTTS is an attempt to package & provide common REST API to all Free/Open Source Text To Speech systems so the FLOSS ecosystem can build on previous work supported by short-lived business interests, rather than start from scratch every time.
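As a sketch of what that common REST API looks like in practice, the snippet below builds a request against OpenTTS's `/api/tts` endpoint. The base URL, port, and voice id are assumptions for illustration (OpenTTS voice ids take a `system:voice` form); adjust them to whatever your local server reports.

```python
from urllib.parse import urlencode

def tts_url(base_url, voice, text):
    """Build an OpenTTS synthesis request URL (GET /api/tts)."""
    return f"{base_url}/api/tts?" + urlencode({"voice": voice, "text": text})

def synthesize(base_url, voice, text, out_path):
    # Not executed here: requires a running OpenTTS server,
    # e.g. via its Docker image.
    import urllib.request
    with urllib.request.urlopen(tts_url(base_url, voice, text)) as resp:
        with open(out_path, "wb") as f:
            f.write(resp.read())  # WAV audio bytes
```

Because every backend (Larynx included) sits behind the same endpoint, swapping TTS systems is just a change of the `voice` parameter.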
AIUI the developer of the first two projects now works for Mycroft AI & is involved in the development of Mimic3 which seems very promising given how much of an impact on quality his solo work has had in just the past couple of years or so.
-
Need a recommendation: Self hosted speech to text service
I haven't used it on its own, but Larynx has worked well for me with Rhasspy
- NATSpeech: High Quality Text-to-Speech Implementation with HuggingFace Demo
- Question: Does anybody know of a working Text to Speech for python on pi?
What are some alternatives?
rhasspy - Offline private voice assistant for many human languages
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
picovoice - On-device voice assistant platform powered by deep learning
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
Speech-Recognition - Speech Recognition library for adding Voice Commands and Controls to all your applications. Whether you are building web apps, native apps or desktop apps, this technology can be integrated into any system with an internet connection.
RHVoice - a free and open source speech synthesizer for Russian and other languages
Node RED - Low-code programming for event-driven applications
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
Porcupine - On-device wake word detection powered by deep learning
TTS - :robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node