wer_are_we vs vosk-api

| | wer_are_we | vosk-api |
|---|---|---|
| Mentions | 4 | 61 |
| Stars | 1,862 | 7,109 |
| Growth | - | 2.6% |
| Activity | 1.8 | 6.6 |
| Latest commit | almost 2 years ago | 11 days ago |
| Language | Jupyter Notebook | - |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
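The exact weighting behind the activity number isn't published here; purely as an illustration of "recent commits have higher weight than older ones", a recency-weighted commit score with an assumed exponential half-life could be computed like this:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Recency-weighted commit count: a commit made `half_life_days` ago
    counts half as much as one made today (the half-life is an assumption)."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for commit_date in commit_dates:
        age_days = (now - commit_date).total_seconds() / 86400.0
        score += 2.0 ** (-age_days / half_life_days)
    return score
```

Under such a scheme, ten commits in the last week outweigh ten commits from last year, which is the behavior the description above implies.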
wer_are_we

- Lichess Voice Recognition Beta is now Live!
  https://github.com/syhw/wer_are_we
  https://github.com/Franck-Dernoncourt/ASR_benchmark#benchmark-results
- OpenAI Whisper Model Comparison
  Great breakdown… with some interesting results and a ton of effort. Are there any open benchmarks like this for all the models that are actually runnable, like the data exposed in https://github.com/syhw/wer_are_we, but with some of your additional metrics?
- Whisper – open source speech recognition by OpenAI
  The authors do explicitly state that they're trying to do a lot of fancy new stuff here, like being multilingual, rather than pursuing accuracy alone.
  [1] https://github.com/syhw/wer_are_we
- This sub is NOT bullying you
vosk-api
- Infini-Gram: Scaling unbounded n-gram language models to a trillion tokens
- VOSK Offline Speech Recognition API
- Apollo dev posts backend code to Git to disprove Reddit's claims of scraping and inefficiency
- Working Vosk model?
- Creating a live transcript bot using Vosk AI
  I don't know if my issue comes from my lack of knowledge of discord.js/voice or Vosk, so the most important thing I need to check is whether I am creating a proper stream for the Vosk API to capture the audio. If I can figure out how to capture an audio stream, I can probably feed it into Vosk and work out the rest myself, but right now I can't even get close. Thank you in advance, and sorry if this isn't the right place for this.
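For reference, the streaming pattern the Vosk API expects is small: feed chunks of 16-bit mono PCM audio to a KaldiRecognizer and poll it for results. A minimal sketch decoding a WAV file (the "model" directory and "audio.wav" names are placeholders; a Discord audio stream would replace the file reads):

```python
import wave
from vosk import Model, KaldiRecognizer

model = Model("model")             # path to an unpacked Vosk model directory
wf = wave.open("audio.wav", "rb")  # expects 16-bit mono PCM
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)     # feed the stream in small chunks
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):   # True when an utterance is complete
        print(rec.Result())        # JSON string with the recognized text
print(rec.FinalResult())           # flush any remaining audio
```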
- What are the applications of Rust in machine learning?
  I remember a while ago checking out the issues with Vosk speech recognition (written in C). A handful of its issues are related to segfaults and null pointers.
- Show HN: Willow – Open-Source Privacy-Focused Voice Assistant Hardware
  First, good initiative, and thanks for sharing. I think you have to be more diligent and careful with the problem statement. Checking the weather in Sofia, Bulgaria requires the cloud for current information; it's not "random speech". The ESP SR capability issues don't mean that you cannot process it locally. The comment was about "voice processing", i.e. sending speech to the cloud, not sending a request to fetch the weather information. Besides local intent detection beyond 400 commands, there are great local STT options that work better than most cloud STTs for "random speech":
  https://github.com/alphacep/vosk-api
- ChatGPT API is now officially available, priced at $0.002 per 1k tokens
  I did a one-off speech-to-text tool for someone last year and had pretty good results with Vosk. One upside is that it works offline, although I imagine that if you use STT a lot you'll notice issues I didn't.
- Looking to mod a Vector with GPT-3, what are my options?
  You can use vosk-api (https://github.com/alphacep/vosk-api) to listen to your audio and transform it to text, then post the text to GPT-3 and, using the Vector SDK, have the responses spoken by Vector.
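A minimal sketch of the listening half of that pipeline, using Vosk's Python bindings with the sounddevice library for microphone capture; the model path and 16 kHz sample rate are assumptions, and the GPT-3 call and Vector SDK step are left as a placeholder comment:

```python
import json
import queue

import sounddevice as sd
from vosk import Model, KaldiRecognizer

q = queue.Queue()

def callback(indata, frames, time, status):
    # sounddevice hands us raw 16-bit PCM buffers; queue them for the recognizer
    q.put(bytes(indata))

model = Model("model")  # path to an unpacked Vosk model directory
rec = KaldiRecognizer(model, 16000)

with sd.RawInputStream(samplerate=16000, blocksize=8000, dtype="int16",
                       channels=1, callback=callback):
    while True:
        data = q.get()
        if rec.AcceptWaveform(data):
            text = json.loads(rec.Result()).get("text", "")
            if text:
                # Here you would post `text` to GPT-3 and have the reply
                # spoken through the Vector SDK.
                print(text)
```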
- A new voice assistant that looks promising
  The setup script wants to download https://github.com/alphacep/vosk-api/releases/download/v0.3.45/vosk-model-en-v0.3.45.zip, but this resource is not found; AFAICT no release ever contained a model file. Remedy: hardcode one model from https://alphacephei.com/vosk/models. I guessed and picked the one with the closest name, vosk-model-en-us-0.22.zip, just so I could continue.
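For anyone hitting the same problem, a sketch of that workaround in Python; the direct download URL is an assumption based on how the models page links its archives:

```python
import urllib.request
import zipfile

# Assumed URL pattern for archives listed at https://alphacephei.com/vosk/models
url = "https://alphacephei.com/vosk/models/vosk-model-en-us-0.22.zip"

urllib.request.urlretrieve(url, "model.zip")  # note: this model is a large download
with zipfile.ZipFile("model.zip") as zf:
    zf.extractall(".")  # should unpack to ./vosk-model-en-us-0.22/
```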
What are some alternatives?
- plaidml - PlaidML is a framework for making deep learning work everywhere.
- whisper - Robust Speech Recognition via Large-Scale Weak Supervision
- openai-whisper-realtime - A quick experiment to achieve almost realtime transcription using Whisper.
- Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.
- DeepSpeech-examples - Examples of how to use or integrate DeepSpeech
- vosk-server - WebSocket, gRPC and WebRTC speech recognition server based on Vosk and Kaldi libraries
- NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
- TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
- py-webrtcvad - Python interface to the WebRTC Voice Activity Detector
- AutoSub - A CLI script to generate subtitle files (SRT/VTT/TXT) for any video using either DeepSpeech or Coqui
- trashbot - Trashbot helper AI assistant
- DeepSpeech - Install Mozilla DeepSpeech on a Raspberry Pi 4