| | py-webrtcvad | openai-whisper-realtime |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 1,894 | 180 |
| Growth | - | - |
| Activity | 0.0 | 10.0 |
| Last commit | 12 months ago | over 1 year ago |
| Language | C | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
py-webrtcvad
Whisper – open source speech recognition by OpenAI
Haven’t tried it yet but love the concept!
Have you thought of using VAD (voice activity detection) for breaks? Back in my day (a long time ago) the webrtc VAD stuff was considered decent:
https://github.com/wiseman/py-webrtcvad
Model isn’t optimized for this use but I like where you’re headed!
Ask HN: I want to get started with Speech-to-Text. Where do I begin?
As part of ETL, or just to build a basic understanding of how speech data is handled, try this tool: https://github.com/wiseman/py-webrtcvad
It is a Python wrapper for a voice activity detection library and makes a good starting point when working on speech recognition problems. It helped me understand and discover a lot of concepts related to audio signals and data when I was in your shoes.
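The real py-webrtcvad wraps WebRTC's GMM-based classifier behind `Vad.is_speech(frame, sample_rate)`, which expects 10/20/30 ms frames of 16-bit mono PCM. The frame-by-frame idea it embodies can be illustrated with a toy energy-based detector; this is a stdlib-only sketch, not webrtcvad's actual algorithm, and the threshold value is made up:

```python
import struct

def frame_energy(frame: bytes) -> float:
    """Mean absolute amplitude of a 16-bit little-endian mono PCM frame."""
    samples = struct.unpack("<%dh" % (len(frame) // 2), frame)
    return sum(abs(s) for s in samples) / len(samples)

def is_speech(frame: bytes, threshold: float = 500.0) -> bool:
    """Toy stand-in for webrtcvad's per-frame speech/non-speech decision."""
    return frame_energy(frame) > threshold

# 30 ms at 16 kHz, 16-bit mono = 480 samples = 960 bytes per frame
silence = b"\x00\x00" * 480
loud = struct.pack("<480h", *([8000] * 480))
print(is_speech(silence), is_speech(loud))  # False True
```

In a real pipeline you would feed consecutive frames from the microphone through the detector and treat a run of non-speech frames as a break between utterances.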
openai-whisper-realtime
Whisper – open source speech recognition by OpenAI
I tried running it in realtime with live audio input (kind of).
You can find the python script in this repo: https://github.com/tobiashuttinger/openai-whisper-realtime
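The usual way to get "realtime (kind of)" behavior out of a batch model like Whisper is to buffer incoming samples and hand the transcriber fixed-size chunks as they fill up. A hedged stdlib-only sketch of that buffering loop, where `transcribe` is a stub standing in for a Whisper call and the 2-second chunk size is illustrative:

```python
from collections import deque

SAMPLE_RATE = 16000
CHUNK_SECONDS = 2                      # illustrative chunk length
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS

def transcribe(samples):
    """Stand-in for a real Whisper call on a chunk of audio samples."""
    return f"<{len(samples)} samples>"

buffer = deque()
transcripts = []

def feed(new_samples):
    """Append captured audio; transcribe every full chunk that accumulates."""
    buffer.extend(new_samples)
    while len(buffer) >= CHUNK_SAMPLES:
        chunk = [buffer.popleft() for _ in range(CHUNK_SAMPLES)]
        transcripts.append(transcribe(chunk))

# simulate a microphone callback delivering 0.5 s blocks of audio
for _ in range(9):
    feed([0.0] * (SAMPLE_RATE // 2))
print(transcripts)  # two full 2 s chunks transcribed; 0.5 s stays buffered
```

Combining this with a VAD (cutting chunks at detected pauses instead of fixed intervals) avoids splitting words mid-chunk, which is exactly where the webrtcvad suggestion above comes in.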
What are some alternatives?
plaidml - PlaidML is a framework for making deep learning work everywhere.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
trashbot - Trashbot helper AI assistant
DeepSpeech-examples - Examples of how to use or integrate DeepSpeech
mycroft-core - Mycroft Core, the Mycroft Artificial Intelligence platform.
wer_are_we - Attempt at tracking states of the arts and recent results (bibliography) on speech recognition.
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
dragonfly - Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR), Kaldi and CMU Pocket Sphinx