AudioWorkletStream vs pocketsphinx

| | AudioWorkletStream | pocketsphinx |
|---|---|---|
| Mentions | 5 | 6 |
| Stars | 25 | 3,745 |
| Growth | - | 0.9% |
| Activity | 5.6 | 7.4 |
| Latest commit | 3 months ago | about 2 months ago |
| Language | HTML | C |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AudioWorkletStream
-
Node.js fetch() vs. Deno fetch(): Implementation details...
```javascript
// Exits half way through reading response when --max-old-space-size=6 is set
// Exits immediately when --jitless flag is set
//
// Usage:
//
// port.postMessage({
//   url: 'https://github.com/guest271314/AudioWorkletStream/raw/master/house--64kbs-0-wav',
//   method: 'get',
//   body: null
// })
```
-
Are you using generators?
Yes. Fetching a single file or multiple files for an infinite stream of audio: https://github.com/guest271314/AudioWorkletStream/blob/master/worker.js. Streaming (real-time) audio is non-trivial: any gaps or glitches in playback will be audible to the user. We could test for expected Float32Arrays. I would suggest a complementary manual test in, e.g., WPT to determine that the audio output has no gaps or glitches and renders at the expected playback rate.
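The worker.js linked above drives the infinite stream by pulling chunks from one or more fetched responses in sequence. A minimal sketch of that generator pattern, assuming WHATWG streams (the function name is hypothetical; the real worker.js posts the bytes on to an AudioWorklet port):

```javascript
// An async generator that drains one or more ReadableStreams in order,
// yielding each chunk as it arrives. Chaining multiple fetched response
// bodies this way produces one continuous sequence of audio bytes.
async function* streamChunks(...streams) {
  for (const stream of streams) {
    const reader = stream.getReader();
    for (;;) {
      const { value, done } = await reader.read();
      if (done) break;
      yield value; // e.g., a Uint8Array of raw audio bytes
    }
  }
}
```

A consumer would convert the yielded bytes to Float32Arrays and post them to the AudioWorklet's MessagePort.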
-
I Created A Web Speech API NPM Package Called SpeechKit
One way to do that is to use Native Messaging on Chromium or Firefox https://github.com/guest271314/native-messaging-espeak-ng, and https://github.com/guest271314/webtransport/blob/main/webTransportEspeakNg.js for some WebTransport experiments. Technically we don't need a local server. We can stream and parse the WAV directly and pipe it to an AudioWorklet or a MediaStreamTrackGenerator https://github.com/guest271314/AudioWorkletStream. The same is true for speech recognition, where audio is piped to the local application and text or JSON is piped back. Note also that espeak-ng has been compiled to WebAssembly. I created native-messaging-espeak-ng for the ability to pass SSML directly to espeak-ng.
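For context, the Native Messaging transport that native-messaging-espeak-ng relies on frames every JSON message with a 4-byte length prefix (native byte order, little-endian on common platforms) followed by UTF-8 JSON. A rough sketch of that framing, with hypothetical function names:

```javascript
// Encode a JS object as a Native Messaging frame:
// 4-byte length prefix (little-endian assumed here) + UTF-8 JSON body.
function encodeNativeMessage(obj) {
  const body = new TextEncoder().encode(JSON.stringify(obj));
  const out = new Uint8Array(4 + body.length);
  new DataView(out.buffer).setUint32(0, body.length, true);
  out.set(body, 4);
  return out;
}

// Decode a Native Messaging frame back into a JS object.
function decodeNativeMessage(bytes) {
  const len = new DataView(bytes.buffer, bytes.byteOffset).getUint32(0, true);
  return JSON.parse(new TextDecoder().decode(bytes.subarray(4, 4 + len)));
}
```

The extension side never sees this framing (the browser handles it); a native host reading stdin has to parse frames like this itself.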
-
How to stream/play a video or audio file on HTTP?
You can stream audio and/or video over HTTP using fetch() (https://github.com/guest271314/AudioWorkletStream), as long as you know how to parse the codec if the media is encoded.
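For uncompressed WAV, "parsing the codec" mostly means reading the header before feeding the raw PCM onward. A minimal sketch, assuming a canonical 44-byte RIFF/WAVE header (real files may place chunks elsewhere; `parseWavHeader` is a hypothetical helper):

```javascript
// Read the fields an audio pipeline needs from a canonical WAV header.
// Assumes the fmt chunk sits at offset 12 and the data chunk at offset 36,
// i.e., the common 44-byte layout; production code should walk the chunks.
function parseWavHeader(bytes /* Uint8Array */) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const tag = (o) => String.fromCharCode(...bytes.subarray(o, o + 4));
  if (tag(0) !== 'RIFF' || tag(8) !== 'WAVE') throw new Error('not a WAV file');
  return {
    numChannels: view.getUint16(22, true),   // e.g., 1 = mono, 2 = stereo
    sampleRate: view.getUint32(24, true),    // frames per second
    bitsPerSample: view.getUint16(34, true), // e.g., 16 for PCM16
    dataOffset: 44,                          // where the PCM body starts
  };
}
```

Everything after `dataOffset` can then be streamed as PCM to an AudioWorklet as chunks arrive from `fetch()`.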
-
Is it possible to have an accurate timer in javascript
Re using a dedicated Worker and AudioWorklet to stream, see, e.g., https://github.com/guest271314/AudioWorkletStream; https://plnkr.co/edit/nECtUZ.
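One reason an AudioWorklet-based clock can be accurate: the audio rendering thread consumes fixed 128-frame quanta at a known sample rate, so counting quanta yields elapsed time that does not drift with `setTimeout` throttling. A sketch of just that arithmetic (`QuantumClock` is a hypothetical helper; in a real AudioWorkletProcessor, `tick()` would run once per `process()` callback):

```javascript
// Derive elapsed time from the number of fixed-size audio quanta processed.
// At 48 kHz with 128-frame quanta, 375 quanta = 48,000 frames = 1 second.
class QuantumClock {
  constructor(sampleRate = 48000, quantumFrames = 128) {
    this.sampleRate = sampleRate;     // frames per second
    this.quantumFrames = quantumFrames; // frames per process() call
    this.frames = 0;
  }
  tick() { this.frames += this.quantumFrames; }
  get seconds() { return this.frames / this.sampleRate; }
}
```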
pocketsphinx
- [Discussion] Looking for an Open-Source Speech to Text model (english) that captures filler words, pauses and also records timestamps for each word.
-
I Created A Web Speech API NPM Package Called SpeechKit
There are espeak-ng https://github.com/espeak-ng/espeak-ng and pocketsphinx https://github.com/cmusphinx/pocketsphinx which can be used locally without making external requests.
-
"Why not just transcribe the audio?" I thought
And so I installed PocketSphinx, "one of Carnegie Mellon University's open source large vocabulary, speaker-independent continuous speech recognition engines."
-
How to train large deep learning models as a startup
- https://github.com/cmusphinx/pocketsphinx
This avoids having to stream audio 24x7 to a cloud model, which would be super expensive. That being said, I'm pretty sure what Alexa does, for example, is send any positive wake-word hit to a cloud model (one that is bigger and more accurate) to verify the local wake-word detection model's prediction, AFAIK.
- Speech recognition library for financial markets
-
Speech recognition
PocketSphinx is generally regarded among voice assistant communities as a less reliable, but straight-OOTB, alternative to a robust listener. It's a good solution when you want multiple hotwords (or just aren't in a position to train even one word).
What are some alternatives?
streams - Streams Standard
vosk - VOSK Speech Recognition Toolkit
GoogleNetworkSpeechSynthesis - Google's Network Speech Synthesis: Bring your own Google API key and proxy
snowboy - Future versions with model training module will be maintained through a forked version here: https://github.com/seasalt-ai/snowboy
speech-kit - Simplifying the Speech Synthesis and Speech Recognition engines for Javascript. Listen for commands and perform callback actions, make the browser speak and transcribe your speech!
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
musical-timer - Timers based in musical parameters (time signature, tempo and beat resolution)
Spoken-Keyword-Spotting - In this repository, we explore using a hybrid system consisting of a Convolutional Neural Network and a Support Vector Machine for Keyword Spotting task.
native-messaging-espeak-ng - Native Messaging => eSpeak NG => MediaStreamTrack
localcroft - Bits for locally-served Mycroft instances
proposal-common-minimum-api
C_to_Python_translator - Using file I/O, converts C code written in one text file to Python code in another text file, applying multiple functions that identify and process specific keywords and formats used in the C language.