| | wer_are_we | py-webrtcvad |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 1,862 | 1,894 |
| Growth | - | - |
| Activity | 1.8 | 0.0 |
| Latest Commit | almost 2 years ago | 12 months ago |
| Language | - | C |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wer_are_we
-
Lichess Voice Recognition Beta is now Live!
https://github.com/syhw/wer_are_we
https://github.com/Franck-Dernoncourt/ASR_benchmark#benchmark-results
-
OpenAI Whisper Model Comparison
Great breakdown… with some interesting results and a ton of effort.
Are there any open benchmarks like this for all models that are actually runnable like the data exposed in https://github.com/syhw/wer_are_we but with some of your additional metrics?
-
Whisper – open source speech recognition by OpenAI
The authors do explicitly state that they're trying to do a lot of fancy new stuff here, like be multilingual, rather than pursuing just accuracy.
[1] https://github.com/syhw/wer_are_we
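For context, the metric that wer_are_we tracks is word error rate: the word-level edit distance between a reference transcript and a hypothesis, divided by the number of reference words. A minimal sketch in plain Python (the `wer` helper name is illustrative, not from the repo):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i ref words and first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion over six words -> 1/6
```

Published WER figures also depend on text normalization (casing, punctuation, number formatting), which is one reason benchmark tables like this one are hard to compare across papers.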
py-webrtcvad
-
Whisper – open source speech recognition by OpenAI
Haven’t tried it yet but love the concept!
Have you thought of using VAD (voice activity detection) for breaks? Back in my day (a long time ago) the webrtc VAD stuff was considered decent:
https://github.com/wiseman/py-webrtcvad
Model isn’t optimized for this use but I like where you’re headed!
-
Ask HN: I want to get started with Speech-to-Text. Where do I begin?
As part of ETL, or just to build a basic understanding of how speech data is handled, try this tool: https://github.com/wiseman/py-webrtcvad
It is a Python wrapper for a voice activity detection library and makes a good starting point for speech recognition work. It helped me understand and discover a lot of concepts related to audio signals and data when I was in your shoes.
What are some alternatives?
plaidml - PlaidML is a framework for making deep learning work everywhere.
openai-whisper-realtime - A quick experiment to achieve almost realtime transcription using Whisper.
trashbot - Trashbot helper AI assistant
DeepSpeech-examples - Examples of how to use or integrate DeepSpeech
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
stable-diffusion - A latent text-to-image diffusion model
dragonfly - Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR), Kaldi and CMU Pocket Sphinx