DeepSpeech
DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
-
PaddleSpeech
Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
-
STT
🐸STT - The deep learning toolkit for Speech-to-Text. Training and deploying STT models has never been so easy.
-
common-voice-android
Repository of the "CV Project" app, an unofficial app for Mozilla Common Voice that lets you contribute to the project from your device.
-
DNS-Challenge
This repo contains the scripts, models, and required files for the Deep Noise Suppression (DNS) Challenge.
-
TTS
🤖 💬 Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts) (by mozilla)
-
NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
-
edgedict
Working online speech recognition based on RNN Transducer (trained model weights are available in the releases).
-
common-voice
Common Voice is part of Mozilla's initiative to help teach machines how real people speak.
Mozilla announced Deep Speech[1] around the same time as Common Voice.
Mozilla Deep Speech is an open source speech recognition engine, based upon Baidu's Deep Speech research paper.
Unsurprisingly, Deep Speech requires a corpus such as... Common Voice.
[1] https://github.com/mozilla/DeepSpeech
[2] Baidu Deep Speech
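Deep Speech-style models are trained with CTC: the acoustic model emits a per-frame distribution over characters plus a blank symbol, and decoding collapses repeated symbols and drops blanks. A minimal greedy decoder, as an illustrative sketch only (DeepSpeech's real decoder uses beam search with a language-model scorer), looks like:

```python
# Greedy CTC decoding sketch: take the best symbol per frame,
# collapse consecutive repeats, then remove the blank.
# Symbol names here are hypothetical; real models index into a
# model-specific alphabet.

BLANK = "_"  # stand-in for the CTC blank token

def ctc_greedy_decode(frame_symbols):
    """frame_symbols: the argmax symbol for each time frame."""
    out = []
    prev = None
    for s in frame_symbols:
        # Emit a symbol only when it differs from the previous frame
        # (collapsing repeats) and is not the blank.
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return "".join(out)

# Repeated frames of the same character collapse to one; a blank
# between two identical characters keeps them distinct ("l_l" -> "ll").
print(ctc_greedy_decode(list("hh_ell_lo")))  # -> hello
```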
I've had good results with https://github.com/flashlight/flashlight/blob/master/flashli.... Seems to work well with spoken English in a variety of accents. The biggest limitation is that the architecture they have pretrained models for doesn't really work well with clips longer than ~15 seconds, so you have to segment your input files.
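Segmenting long recordings into ~15-second pieces before transcription can be done with Python's standard-library `wave` module. This is a sketch that cuts at fixed frame boundaries; a real pipeline would rather cut on silence so words aren't split mid-utterance:

```python
import math
import wave

def split_wav(path, out_prefix, chunk_seconds=15):
    """Split a WAV file into pieces of at most chunk_seconds each.

    Returns the list of output file paths (out_prefix_000.wav, ...).
    """
    paths = []
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = params.framerate * chunk_seconds
        n_chunks = math.ceil(src.getnframes() / frames_per_chunk)
        for i in range(n_chunks):
            # readframes returns fewer frames for the final chunk.
            data = src.readframes(frames_per_chunk)
            out_path = f"{out_prefix}_{i:03d}.wav"
            with wave.open(out_path, "wb") as dst:
                # Copy channel count, sample width, and rate; the wave
                # module patches the frame count on close.
                dst.setparams(params)
                dst.writeframes(data)
            paths.append(out_path)
    return paths
```

For example, a 35-second mono 16 kHz file yields three chunks of 15, 15, and 5 seconds.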
Ah, damn. Didn't realise.
It also looks like Baidu are now developing their Deep Speech as open source? https://github.com/PaddlePaddle/DeepSpeech
The app is entirely open source and available on F-Droid:
https://github.com/Sav22999/common-voice-android
https://f-droid.org/packages/org.commonvoice.saverio/
Is anyone aware of classification (e.g. word prediction) datasets for low-resource and endangered languages?
If so, we would like to use it for the HEAR NeurIPS competition: https://github.com/microsoft/DNS-Challenge/tree/master/datas...
The challenge is restricted only to classification tasks, and sequence modeling like full ASR is unfortunately beyond the scope of the competition.
This may be off-topic but: What's the relationship between Coqui (an OSS TTS startup) https://coqui.ai/about and Mozilla? I recall that the project at one point was called mozilla/TTS (https://github.com/mozilla/TTS/) and now I see that has a fork in the startup's own repo (https://github.com/coqui-ai/TTS). Presumably Common Voice is used to train mozilla/TTS and other OSS TTS solutions?
https://github.com/NVIDIA/NeMo which is open source, Pytorch based and regularly publishes new models and checkpoints.
I created edgedict[0] a year ago as part of my side projects. At the time it was the only open source STT with streaming capabilities. If anyone is interested, the pretrained weights for English and Chinese are available.
[0] https://github.com/theblackcat102/edgedict
You arguably have something of substance to add - you can help improve the datasets by speaking or validating phrases on the project's website:
https://commonvoice.mozilla.org/
There are many languages available to pick from.