ovos-core vs piper

| | ovos-core | piper |
|---|---|---|
| Mentions | 13 | 40 |
| Stars | 103 | 4,200 |
| Growth | 1.9% | 16.5% |
| Activity | 9.1 | 8.6 |
| Latest commit | 3 days ago | 7 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ovos-core
-
Is Mycroft still worth it?
Check out OVOS.
-
Home Assistant’s Year of the Voice – Chapter 2
I used to work for Mycroft, so I'm hoping to eventually create an image that's compatible with Home Assistant pipelines.
For now, though, you may want to check out OVOS: https://openvoiceos.com/
-
?largest capacity DDR3L (1.35v) SODIMM
Blast, you're so right! I already ordered an M92, but when LLaMA takes off for voice assistants and I max out my RAM, I shall have a think about that HP G2!! With one of those, maybe I could have enough RAM to run OpenBSD and have several VMs running on it! Thank you :D
- I've been working on Serge, a self-hosted alternative to ChatGPT. It's dockerized, easy to set up, and it runs the models 100% locally. No remote API needed.
-
Proof of existence
The upper NUC is running a manual install of ovos-core, with sound output through the 22" touchscreen monitor and a Kinect V1 as a microphone.
-
OVOS - Persona Initiative
Good news, everyone! As those of you who are following OpenVoiceOS GoFundMe campaign may already have spotted, we've surpassed our most recent fundraising target, and set a new one. Our current stretch goal is a doozy. We're looking to give the Assistant a personality! More specifically, a configurable personality, to make your Assistant that much more... yours.
- ovos-core 0.0.7 was just released!
-
OVOS migration with docker containers ...
However, it did not work for me out of the box. I had to create some custom Dockerfiles to make adjustments to some of the images before all of them would start up correctly. I got around the Rapidfuzz issue you ran into using the fix described here: https://github.com/OpenVoiceOS/ovos-core/issues/267
-
I would like a voice assistant for Home Assistant. What are my options ?
OVOS: There's a Dockerfile in the core repo that you should be able to use to make a Docker image for headless OVOS-Mycroft. You'd probably still have to set up the personal backend for it - not sure, as I haven't set it up yet. This is likely the voice assistant replacement route that I'm going with, when there is time to do it.
- OpenVoiceOS website
piper
-
Ask HN: Open-source, local Text-to-Speech (TTS) generators
Mozilla's browser TTS is kind of not bad; just parse and buffer one sentence at a time and it does all right.
For the backend, I've experimented with piper, which has a lot of voices and accents, though it's tricky to buffer and sync long texts.
https://github.com/rhasspy/piper
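A minimal sketch of that sentence-by-sentence buffering idea, assuming the `piper` CLI is installed along with a downloaded voice model; the model name, flag spellings, and file layout below are illustrative assumptions, not details from the post:

```python
# Sketch: split text into sentences and synthesize each one with the piper CLI,
# so playback can start before the whole passage has been rendered.
import re
import subprocess
from pathlib import Path

VOICE = "en_US-lessac-medium.onnx"  # assumed local voice model path

def synthesize_by_sentence(text: str, out_dir: str = "tts_out") -> list[Path]:
    Path(out_dir).mkdir(exist_ok=True)
    # Naive sentence split; a real implementation might use nltk or spaCy.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    wav_paths = []
    for i, sentence in enumerate(sentences):
        wav = Path(out_dir) / f"chunk_{i:03d}.wav"
        # piper reads text on stdin and writes a WAV when an output file is given.
        subprocess.run(
            ["piper", "--model", VOICE, "--output_file", str(wav)],
            input=sentence.encode("utf-8"),
            check=True,
        )
        wav_paths.append(wav)  # hand each chunk to the player as soon as it exists
    return wav_paths

if __name__ == "__main__":
    print(synthesize_by_sentence("Hello there. This is a buffering test!"))
```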
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
After some brief research, it seems the issue you're seeing may be a known bug in at least some versions/releases of espeak-ng.
Here are some potentially related links if you'd like to dig deeper:
* "questions about mandarin data packet #1044": https://github.com/espeak-ng/espeak-ng/issues/1044
* "ESpeak NJ-1.51’s Mandarin pronunciation is corrupted #12952": https://github.com/nvaccess/nvda/issues/12952
* "The pronunciation of Mandarin Chinese using ESpeak NJ in NVDA is not normal #1028": https://github.com/espeak-ng/espeak-ng/issues/1028
* "When espeak-ng translates Chinese (cmn), IPA tone symbols are not output correctly #305": https://github.com/rhasspy/piper/issues/305
* "Please default ESpeak NG's voice role to 'Chinese (Mandarin, latin as Pinyin)' for Chinese to fix #12952 #13572": https://github.com/nvaccess/nvda/issues/13572
* "Cmn voice not correctly translated #1370": https://github.com/espeak-ng/espeak-ng/issues/1370
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
If you're not already aware, the primary developer of Mimic 3 (and its non-Mimic predecessor Larynx) continued TTS-related development with Larynx and the renamed project Piper: https://github.com/rhasspy/piper
Last year Piper development was supported by Nabu Casa for their "Year of Voice" project for Home Assistant and it sounds like Mike Hansen is going to continue on it with their support this year.
-
Coqui.ai Is Shutting Down
Coqui-ai was a commercial continuation of Mozilla TTS and STT (https://github.com/mozilla/TTS).
At the time (2018-ish), it was really impressive for on-device voice synthesis (with a quality approaching the Google and Azure cloud-based voice synthesis options) and open source, so a lot of people in the FOSS community were hoping it could be used for a privacy-respecting home assistant, Linux speech synthesis that doesn't suck, etc.
After Mozilla abandoned the project, Coqui continued development and had some really impressive one-shot voice cloning, but pivoted to marketing speech synthesis for game developers. They were probably having trouble monetizing it, and it doesn't surprise me that they shut down.
An equivalent project that's still in active development and doing really well is Piper TTS (https://github.com/rhasspy/piper).
-
OpenVoice: Versatile Instant Voice Cloning
There isn't an ElevenLabs app like that, but I think that's the most expedient method, by far.
(details and warning: in-depth, opinionated take, written almost for my own benefit, I've done a lot of work near here recently but haven't had to organize my thoughts until now)
Why? Local inference is hard. You need two things: a clips-to-voice model (which we have here, but bleeding edge), and a text + voice -> speech model.
Text + voice to speech, locally, has excellent prior art for me, in the form of a Raspberry Pi-based ONNX inference library called [Piper](https://github.com/rhasspy/piper). I should just be able to copy that, about an afternoon of work!
Except...when these models are trained, they encode plaintext to model input using a library called eSpeak. eSpeak is basically f(plaintext) => ints representing phonemes. eSpeak is a C library, written in a style I haven't seen in a while, and it depends on other C libraries. So I end up needing to port like 20K lines of C to Dart...or I could use WASM, but over the last year I lost the ability to reason through how to get WASM running in Dart, both native and web.
It's a really annoying technical problem: the speech models all use this eSpeak C library to turn plaintext => model input (tokenized phonemes).
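For illustration, here is a small sketch of that plaintext => phonemes step, using the `phonemizer` package as a Python front end to eSpeak NG. The choice of tooling is an assumption for the example (Piper itself ships its own espeak-ng bindings), and it is not the poster's Dart/WASM setup:

```python
# Sketch of text -> phonemes via eSpeak NG, the step the TTS models depend on.
from phonemizer import phonemize

text = "Local inference is hard."
# backend="espeak" calls into eSpeak NG and returns IPA phoneme strings;
# a TTS model then maps these phonemes to integer ids for inference.
phonemes = phonemize(text, language="en-us", backend="espeak", strip=True)
print(phonemes)  # e.g. something like "ləˈkəl ˈɪnfɚɹəns ɪz hɑːɹd"
```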
Re: ElevenLabs
I had looked into the API months ago and vaguely remembered it was _very_ complete.
I spent the last hour or two playing with it, and reconfirmed that. They have enough API surface that you could build an API that took voice recordings, created a voice, and then did POSTs / socket connection to get audio data from that voice at will.
Only issue is pricing IMHO, $0.18 for 1000 characters. :/ But this is something I feel very comfortable saying wouldn't be _that_ much work to build and open source with a "bring your own API key" type thing. I had forgotten about Eleven Labs till your post, which made me realize there was an actually meaningful and quite moving use case for it.
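As a rough sketch of that "bring your own API key" idea: a single POST to the ElevenLabs v1 text-to-speech endpoint returns synthesized audio for a given voice. The voice id, key handling, and output format below are placeholders rather than details from the post:

```python
# Sketch: minimal bring-your-own-key client for the ElevenLabs TTS endpoint.
import requests

API_KEY = "your-elevenlabs-api-key"   # placeholder
VOICE_ID = "your-voice-id"            # placeholder, e.g. a previously cloned voice

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": "Hello from a bring-your-own-key client."},
    timeout=60,
)
resp.raise_for_status()
with open("speech.mp3", "wb") as f:
    f.write(resp.content)  # response body is the synthesized audio
```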
-
Hello guys, any selfhosted alternative to eleven labs?
piper (https://github.com/rhasspy/piper)
-
[D] What offline TTS Model is good enough for a realistic real-time task?
I have been using piper-tts and it is GREAT and super lightweight / easy to use. On a 2080 I'm sure you can use the HQ models no worries!
-
Easy implement TTS libary for cpp
So I found a library on GitHub with a README and good documentation, called piper (https://github.com/rhasspy/piper). Apparently this library is aimed at the Raspberry Pi, and yes, there is a text function, but I would need to modify it to make it simpler. My simple project doesn't need this kind of big, complex library; all I need is, as I said before, just a function that can output sound from the computer using a C++ library.
-
Piper-whistle – Tool for piper TTS voice model management
piper-whistle is a tool to manage voices used with the piper (https://github.com/rhasspy/piper) speech synthesizer. Main motivation was to download and reference models in a structured way. You may browse the docs online at https://think-biq.gitlab.io/piper-whistle/
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
You may want to try Piper for this case (RPi 4): https://github.com/rhasspy/piper
What are some alternatives?
docker-mycroft - Mycroft AI Voice Assistant Docker images and docker-compose.yml files for x86_64, armv7l and aarch64 CPU architectures.
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
mycroft-core - Mycroft Core, the Mycroft Artificial Intelligence platform.
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
Home Assistant - :house_with_garden: Open source home automation that puts local control and privacy first.
espeak-ng - eSpeak NG is an open source speech synthesizer that supports more than a hundred languages and accents.
coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
silero-models - Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple
ovos-solver-plugin-llmcpp
mimic3 - A fast local neural text to speech engine for Mycroft
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative