After the wake word is detected, I want to send an audio stream of whatever the user says next to my server over a websocket, so that it gets transcribed in real time. The sample code I have for that audio streaming and transcription uses WebRTC: https://github.com/dialogflow/selfservicekiosk-audio-streaming/blob/master/examples/example5.html

Will I need both WebRTC and web-voice-processor for my purposes, i.e. detecting the wake word with Porcupine and then streaming the audio through a websocket afterwards? For what I'm trying to achieve, they both seem able to do the same thing: get a stream from the microphone. Am I missing something here?
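For what it's worth, both libraries ultimately sit on top of the same browser primitive, `getUserMedia`, so one microphone stream can serve both jobs. Below is a minimal sketch of the streaming half only: it converts Web Audio's Float32 samples to 16-bit linear PCM (the encoding Dialogflow / Cloud STT streaming typically expects) and pushes the chunks over a websocket. The server URL and the point at which you call it (after Porcupine fires) are assumptions; the wake-word detection itself is not shown.

```javascript
// Convert a Float32 [-1, 1] Web Audio buffer to 16-bit linear PCM,
// the sample format commonly expected by streaming STT backends.
function floatTo16BitPCM(float32) {
  const pcm = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    // Clamp, then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, float32[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return pcm;
}

// Browser-only wiring (call this once the wake word has been detected).
// The socket URL is hypothetical; adapt it to your server.
async function streamMicOverWebSocket(socketUrl) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  // ScriptProcessorNode is deprecated but still widely supported;
  // an AudioWorklet is the modern replacement.
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  const ws = new WebSocket(socketUrl);
  ws.binaryType = "arraybuffer";

  processor.onaudioprocess = (e) => {
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(floatTo16BitPCM(e.inputBuffer.getChannelData(0)).buffer);
    }
  };

  source.connect(processor);
  processor.connect(ctx.destination);
}
```

The point of the sketch: nothing here requires WebRTC's peer-connection machinery; the example you linked only uses `getUserMedia` from the WebRTC API surface, which web-voice-processor also relies on internally, so sharing a single microphone capture between Porcupine and the websocket stream should be enough.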