STT vs DNS-Challenge

| | STT | DNS-Challenge |
|---|---|---|
| Mentions | 11 | 2 |
| Stars | 2,103 | 954 |
| Growth | 2.3% | 3.2% |
| Activity | 0.6 | 3.3 |
| Latest commit | 18 days ago | 4 months ago |
| Language | C++ | Python |
| License | Mozilla Public License 2.0 | Creative Commons Attribution 4.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
STT
-
Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
What has happened since then? I know Common Voice has come and gone https://en.wikipedia.org/wiki/Common_Voice https://github.com/coqui-ai/STT
And I've seen some neural approaches too
No idea where to look for comparisons though.
-
Numen - FOSS voice control for handsfree computing
I basically just used coqui stt https://github.com/coqui-ai/STT
-
Are there any OCR and Speech-to-Text services that are privacy friendly?
This speech-to-text works well: https://github.com/coqui-ai/STT. OpenAI's "Whisper" is probably better, but I haven't tried it: https://towardsdatascience.com/transcribe-audio-files-with-openais-whisper-e973ae348aa7
-
Introducing Whisper
I use two STT tools to live-transcribe audio that I listen to, so I can look back (in paragraph form) at things that I or the YouTube video previously said: https://github.com/coqui-ai/STT https://github.com/ratwithacompiler/OBS-captions-plugin
-
You can now tether any prod Vector to Wire's Open Source Escape Pod • thedroidyouarelookingfor
I did have to install Coqui STT and go-asticoqui manually before I was able to run Chipper.
-
I put together a tutorial and overview on how to use DeepSpeech to do Speech Recognition in Python
If anyone is looking for a maintained version of DeepSpeech, check out Coqui's repositories for STT and TTS. Coqui is led by the engineers who used to work on DeepSpeech at Mozilla.
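For anyone following that tutorial, the basic flow with Coqui STT's Python bindings is quite short. This is a minimal sketch, assuming the `stt` package (`pip install stt`) is installed and a model file has been downloaded from Coqui; the `model.tflite` path below is a placeholder, not a real file shipped with the library.

```python
# Minimal sketch of transcribing a 16 kHz, 16-bit mono WAV with Coqui STT.
# Assumes: `pip install stt numpy` and a downloaded Coqui model file.
import wave


def read_wav_int16(path):
    """Read a 16-bit mono WAV file and return its raw sample bytes."""
    with wave.open(path, "rb") as w:
        # Coqui STT models expect 16-bit mono audio (typically 16 kHz).
        assert w.getsampwidth() == 2 and w.getnchannels() == 1
        return w.readframes(w.getnframes())


def transcribe(wav_path, model_path="model.tflite"):
    """Run Coqui STT on a WAV file; model_path is a placeholder."""
    import numpy as np
    from stt import Model  # Coqui STT Python bindings

    audio = np.frombuffer(read_wav_int16(wav_path), dtype=np.int16)
    model = Model(model_path)
    return model.stt(audio)
```

The imports for `numpy` and `stt` live inside `transcribe()` so the WAV-reading helper works even without the speech packages installed.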
-
CoquiTTS: 🐸💬 - Open Source Text-to-Speech framework.
Link: https://github.com/coqui-ai/STT
- Mozilla Common Voice Adds 16 New Languages and 4,600 New Hours of Speech
- Coqui, a startup providing open speech tech for everyone
DNS-Challenge
-
Mozilla Common Voice Adds 16 New Languages and 4,600 New Hours of Speech
Is anyone aware of classification (e.g. word prediction) datasets for low-resource and endangered languages?
If so, we would like to use it for the HEAR NeurIPS competition: https://github.com/microsoft/DNS-Challenge/tree/master/datas...
The challenge is restricted only to classification tasks, and sequence modeling like full ASR is unfortunately beyond the scope of the competition.
What are some alternatives?
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
NeMo - NeMo: a framework for generative AI
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
TTS - :robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
OBS-captions-plugin - Closed Captioning OBS plugin using Google Speech Recognition
flashlight - A C++ standalone library for machine learning
vakyansh-models - Open source speech to text models for Indic Languages
PaddleSpeech - Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
LocalSTT - Android Speech Recognition Service using Vosk/Kaldi and Mozilla DeepSpeech
common-voice - Common Voice is part of Mozilla's initiative to help teach machines how real people speak.