Spoken-Keyword-Spotting
pocketsphinx
| | Spoken-Keyword-Spotting | pocketsphinx |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 80 | 3,725 |
| Growth | - | 1.4% |
| Activity | 0.0 | 7.4 |
| Latest commit | over 1 year ago | about 1 month ago |
| Language | Python | C |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Spoken-Keyword-Spotting
- How to train large deep learning models as a startup
The search term you're looking for is "keyword spotting" - that's what's implemented locally on embedded devices that sit and wait for something relevant to come along, so they know when to start sending data up to the mothership (or even to turn on additional higher-power cores locally).
Here's an example repo that might be interesting (from initial impressions, though there are many more out there) : https://github.com/vineeths96/Spoken-Keyword-Spotting
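To make the idea concrete, here is a minimal sketch of the trigger logic a local keyword spotter typically runs: a small on-device model emits per-frame keyword probabilities, those posteriors are smoothed over a sliding window, and the device only "wakes up" (and starts streaming) once the smoothed score crosses a threshold. The window size, threshold, and probability stream below are illustrative values, not taken from the linked repo.

```python
from collections import deque

def keyword_trigger(frame_probs, window=30, threshold=0.8):
    """Yield frame indices where the smoothed keyword posterior crosses
    the threshold. Window and threshold are illustrative, not from the repo."""
    recent = deque(maxlen=window)
    for i, p in enumerate(frame_probs):
        recent.append(p)
        # Posterior smoothing: averaging over the last `window` frames
        # suppresses single-frame spikes from the small local model.
        if len(recent) == window and sum(recent) / window >= threshold:
            yield i
            recent.clear()  # re-arm: wait for a fresh full window

# Toy probability stream: silence, then a sustained keyword hit, then silence.
probs = [0.05] * 50 + [0.95] * 40 + [0.05] * 20
hits = list(keyword_trigger(probs, window=10, threshold=0.8))
# First trigger fires shortly after the keyword starts; nothing fires
# during the leading or trailing silence.
```

Everything before the trigger stays on-device, which is exactly what makes this viable for battery-powered hardware.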
pocketsphinx
- [Discussion] Looking for an Open-Source Speech to Text model (english) that captures filler words, pauses and also records timestamps for each word.
- I Created A Web Speech API NPM Package Called SpeechKit
There are espeak-ng (https://github.com/espeak-ng/espeak-ng) and pocketsphinx (https://github.com/cmusphinx/pocketsphinx), both of which can be used locally without making external requests.
- "Why not just transcribe the audio?" I thought
And so I installed PocketSphinx, "one of Carnegie Mellon University's open source large vocabulary, speaker-independent continuous speech recognition engines."
- How to train large deep learning models as a startup
- https://github.com/cmusphinx/pocketsphinx
This avoids having to stream audio 24/7 to a cloud model, which would be super expensive. That said, I'm pretty sure what Alexa does, for example, is send any positive wake-word detection to a bigger, more accurate cloud model to verify the local detection model's prediction, AFAIK.
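The two-stage setup described above (a cheap local wake-word model gating a bigger, more accurate cloud verifier) can be sketched like this. `local_score` and `cloud_verify` are hypothetical stand-ins for demonstration, not a real Alexa or pocketsphinx API:

```python
def local_score(audio_chunk):
    # Hypothetical stand-in for a small on-device wake-word model;
    # here it just returns a precomputed score for demonstration.
    return audio_chunk["score"]

def cloud_verify(audio_chunk):
    # Hypothetical stand-in for the larger, more accurate cloud model.
    # Only reached on local positives, so 24/7 streaming is avoided.
    return audio_chunk["is_wake_word"]

def handle(audio_chunk, local_threshold=0.6):
    """Return True only when both the local and cloud models agree."""
    if local_score(audio_chunk) < local_threshold:
        return False              # vast majority of audio stops here, on-device
    return cloud_verify(audio_chunk)  # rare network call to double-check

# Silence never leaves the device; a local false positive is rejected
# by the cloud check; a real wake word passes both stages.
silence = {"score": 0.1, "is_wake_word": False}
false_alarm = {"score": 0.9, "is_wake_word": False}
wake = {"score": 0.95, "is_wake_word": True}
```

The design point is that the expensive model's cost scales with the local model's (low) positive rate, not with total listening time.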
- Speech recognition library for financial markets
- Speech recognition
PocketSphinx is generally regarded in voice assistant communities as a less reliable, but works-straight-out-of-the-box, alternative to a robust listener. It's a good solution when you want multiple hotwords (or just aren't in a position to train even one word).
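For the multiple-hotwords case, PocketSphinx's keyword-spotting search mode accepts a keyword list file in which each line pairs a keyphrase with a detection threshold (lower values trigger more easily, at the cost of false alarms). The phrases and thresholds below are illustrative, and the thresholds would need tuning per phrase:

```
hey computer /1e-20/
turn on the lights /1e-30/
stop /1e-10/
```

Shorter phrases generally need stricter (larger) thresholds than longer ones, since they match spurious audio more often.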
What are some alternatives?
spokestack-python - Spokestack is a library that allows a user to easily incorporate a voice interface into any Python application with a focus on embedded systems.
vosk - VOSK Speech Recognition Toolkit
svm-pytorch - Linear SVM with PyTorch
snowboy - Future versions with model training module will be maintained through a forked version here: https://github.com/seasalt-ai/snowboy
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
localcroft - Bits for locally-served Mycroft instances
C_to_Python_translator - Converts C code in one text file to Python code in another text file using file I/O, with functions that identify and process the keywords and formats of the C language.
xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)
speech-kit - Simplifying the Speech Synthesis and Speech Recognition engines for Javascript. Listen for commands and perform callback actions, make the browser speak and transcribe your speech!
GoogleNetworkSpeechSynthesis - Google's Network Speech Synthesis: Bring your own Google API key and proxy