common-voice vs allosaurus

| | common-voice | allosaurus |
|---|---|---|
| Mentions | 67 | 2 |
| Stars | 3,383 | 621 |
| Growth | 0.3% | 0.0% |
| Activity | 10.0 | 0.0 |
| Latest commit | 1 day ago | about 1 year ago |
| Language | TypeScript | Python |
| License | Mozilla Public License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
common-voice
-
A CC-By Open-Source TTS Model with Voice Cloning
Yeah there's no chance Mozilla would do anything like this:
https://commonvoice.mozilla.org/
-
OpenAI's Whisper is another case study in Colonisation
Mozilla's Common Voice Project (https://commonvoice.mozilla.org/) is creating an open dataset for many minority languages to make it easier to support them in STT systems. If you speak one of these languages, please consider donating a few minutes of your voice.
- Mozilla Launching a Public Voice Dataset
-
Common Voice
> it was not at all obvious to me there was some way of speeding up getting a language in the first place.
Yeah, that's the biggest failing of Common Voice in my opinion. Getting a new language up to speed could be much improved by simply adding a few links to documentation, but even the existing links are broken, which I reported in March 2022... https://github.com/common-voice/common-voice/issues/3637
> I have no interest in wasting time contributing to a UI translation I actively don't want to be subjected to
Translating the UI may still help you get other people to record, even if you don't want to use it yourself.
> I'll see if I can submit some sentences at least
If you want to go faster, there's also a project to extract sentences from Wikipedia etc. in small doses that Mozilla's lawyers and Wikimedia's lawyers have agreed are fair use. I think you'd only need to define how Norwegian Bokmål separates sentences. (E.g. after a period, but not if it's a common abbreviation like "etc." in the preceding sentence.)
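A minimal sketch of such a rule in Python, assuming a hand-curated abbreviation list (the list and the `split_sentences` helper below are illustrative only, not part of Common Voice's actual sentence extractor):

```python
# Illustrative only: a naive Bokmål sentence splitter that breaks on a
# period unless the token is a known abbreviation.
ABBREVIATIONS = {"bl.a.", "f.eks.", "osv.", "ca.", "dvs.", "mv."}  # incomplete, hypothetical list

def split_sentences(text: str) -> list[str]:
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        # End a sentence on a trailing period, unless it belongs to an abbreviation.
        if token.endswith(".") and token.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

print(split_sentences("Vi kjøpte epler, pærer osv. på torget. Det regnet hele dagen."))
# ['Vi kjøpte epler, pærer osv. på torget.', 'Det regnet hele dagen.']
```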
- Practice speaking and listening in your target language on Common Voice
-
Web Speech API is (still) broken on Linux circa 2023
There is a lot of TTS and STT development going on (https://github.com/mozilla/TTS; https://github.com/mozilla/DeepSpeech; https://github.com/common-voice/common-voice). That is the only way they work: contributions from the wild.
- How do I get audio data from native speakers for Anki?
-
Web Speech API is not available in the Quest browser
Since you're interested in STT and TTS, let me just plug Mozilla's Common Voice, a way for everyone to contribute to an open-source dataset for STT. You can record yourself or verify other people's recordings!
-
Mozilla Common Voice - Korean Language is live - Help Build a Korean Corpus for Training AI/Navi/etc
[Common Voice e-mail](mailto:[email protected]) || Common Voice || Korean Language Homepage || FAQs || Speaking Aloud and Reviewing Recordings || Sentence Collector || NVidia/NeMo
-
Ask HN: Open-source video transcribing software?
How can it be used for transcription?
On their website I only see an interface for either uploading audio or submitting transcriptions:
https://commonvoice.mozilla.org/es
The GitHub repo they mention (https://github.com/common-voice/common-voice) seems to be just that sample collection software. I do not see where I can download the software to transcribe audio.
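For what it's worth, Common Voice itself is a dataset and collection tool rather than a transcriber; the actual transcription is done by an STT engine trained on data like it, such as DeepSpeech (mentioned below). A rough sketch of that, assuming a DeepSpeech 0.9.x model and scorer downloaded separately and a 16 kHz, 16-bit mono WAV file:

```python
import wave
import numpy as np
from deepspeech import Model  # pip install deepspeech

# Placeholder paths: the acoustic model and scorer come from the
# DeepSpeech releases page, not from the common-voice repo.
ds = Model("deepspeech-0.9.3-models.pbmm")
ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# DeepSpeech expects 16 kHz, 16-bit mono PCM audio.
with wave.open("recording.wav", "rb") as w:
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(ds.stt(audio))  # prints the transcript
```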
allosaurus
-
Complete table of all IPA vowels' formant frequencies
Thank you for a great reply! If I catch your drift, how does this square with phonemic transcription? Suppose we have an automatic phone recognizer tool such as Allosaurus.
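Allosaurus exposes a small Python API for exactly that (per its README); a minimal sketch, assuming a mono WAV recording at `sample.wav`:

```python
from allosaurus.app import read_recognizer  # pip install allosaurus

# Load the pretrained universal phone recognizer.
model = read_recognizer()

# Recognize phones (not phonemes) in the recording.
print(model.recognize("sample.wav"))

# Optionally restrict the phone inventory to one language
# via an ISO 639-3 code, e.g. English:
print(model.recognize("sample.wav", "eng"))
```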
-
Python and Speech recognition
And for phoneme recognition:
- this looks like it could be useful (I'm sure you won't mind if it's "phones" instead of "phonemes"): https://github.com/xinjli/allosaurus
- about using standard speech recognition tools: https://cmusphinx.github.io/wiki/phonemerecognition/
What are some alternatives?
vosk-server - WebSocket, gRPC and WebRTC speech recognition server based on Vosk and Kaldi libraries
SpeechRecognition - Speech recognition module for Python, supporting several engines and APIs, online and offline.
forced-alignment-tools - A collection of links and notes on forced alignment tools
wikipron - Massively multilingual pronunciation mining
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
diffwave - DiffWave is a fast, high-quality neural vocoder and waveform synthesizer.
