| | audapolis | SpeechRecognition |
|---|---|---|
| Mentions | 8 | 16 |
| Stars | 639 | 8,071 |
| Growth | 2.2% | - |
| Activity | 6.7 | 8.7 |
| Latest commit | 7 months ago | 11 days ago |
| Language | TypeScript | Python |
| License | GNU Affero General Public License v3.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
audapolis
- Audapolis: An editor for spoken-word audio with automatic transcription
- MacWhisper: Transcribe audio files on your Mac
  Here's a multi-platform open source app that does the same thing but uses vosk instead of whisper: https://github.com/bugbakery/audapolis
- Will Kden ever have Ai
- Self-hosted audio transcription?
  Audapolis is also an interesting option: https://github.com/audapolis/audapolis
- [Looking for] Ai audio denoise & transcript
- Audapolis – Edit audio and video by selecting text
SpeechRecognition
- help with script (beginner)
  Start and Stop Listening Example
- MacWhisper: Transcribe audio files on your Mac
  There is a great library that supports not only OpenAI's Whisper but many other engines, some of which also work offline: https://github.com/Uberi/speech_recognition
- Unpopular Opinion: a lot of Obsidian community make Obsidian sound like something cringey/productivity guru-y
  This is the library: https://github.com/Uberi/speech_recognition
- Nvim-VoiceRec: Add Speech-To-Text To Neovim! (useful for GPT)
  It is a Python remote plugin that is a thin wrapper around the speech_recognition package.
- Speech-to-text software
- Voice commands in Doom Eternal possible?
  I am less familiar with speech recognition myself. I implemented something similar many years ago, back when Google had a REST API that let you upload audio and get back the recognized words or sentence; I think the same API is still available. It limited how much audio you could send, but for voice commands it was pretty solid. SpeechRecognition looks like a library worth trying for this, though, since it can do offline processing depending on the underlying engine. It also has some examples to look at.
- Build Simple CLI-Based Voice Assistant with PyAudio, Speech Recognition, pyttsx3 and SerpApi
  SpeechRecognition
- Need help with speech recognition
- Wiki for the podcast
  I found this one here
- How to use my speaker as input and my mic as output?
  https://github.com/Uberi/speech_recognition/blob/master/reference/library-reference.rst this might help. I guess your best bet is to rtfm.
What are some alternatives?
- vosk-server - WebSocket, gRPC and WebRTC speech recognition server based on Vosk and Kaldi libraries
- pydub - Manipulate audio with a simple and easy high level interface
- whisper-diarization - Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper
- pyAudioAnalysis - Python Audio Analysis Library: Feature Extraction, Classification, Segmentation and Applications
- LLMStack - No-code platform to build LLM Agents, workflows and applications with your data
- allosaurus - Allosaurus is a pretrained universal phone recognizer for more than 2000 languages
- buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
- aeneas - aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka forced alignment)
- whisperer - On-demand prompt-aided voice-to-text with OpenAI's Whisper
- speech-to-text-websockets-python
- whisper - Robust Speech Recognition via Large-Scale Weak Supervision
- speechpy - :speech_balloon: SpeechPy - A Library for Speech Processing and Recognition: http://speechpy.readthedocs.io/en/latest/