Kaldi Speech Recognition Toolkit vs NeMo

| | Kaldi Speech Recognition Toolkit | NeMo |
|---|---|---|
| Mentions | 22 | 29 |
| Stars | 13,768 | 10,128 |
| Growth | 1.1% | 3.1% |
| Activity | 6.7 | 9.8 |
| Latest commit | 11 days ago | 7 days ago |
| Language | Shell | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Kaldi Speech Recognition Toolkit
-
Amazon plans to charge for Alexa in June–unless internal conflict delays revamp
Yeah, whisper is the closest thing we have, but even it requires more processing power than is present in most of these edge devices in order to feel smooth. I've started a voice interface project on a Raspberry Pi 4, and it takes about 3 seconds to produce a result. That's impressive, but not fast enough for Alexa.
From what I gather a Pi 5 can do it in 1.5 seconds, which is closer, so I suspect it's only a matter of time before we do have fully local STT running directly on speakers.
> Probably anathema to the space, but if the devices leaned into the ~five tasks people use them for (timers, weather, todo list?) could probably tighten up the AI models to be more accurate and/or resource efficient.
Yes, this is the approach taken by a lot of streaming STT systems, like Kaldi [0]. Rather than use a fully capable model, you train a specialized one that knows what kinds of things people are likely to say to it.
[0] http://kaldi-asr.org/
- Unsupervised (Semi-Supervised) ASR/STT training recipes
-
Steve's Explanation of the Viterbi Algorithm
You can study CTC in isolation, ignoring all the HMM background. That is how CTC was also originally introduced, by mostly ignoring any of the existing HMM literature. So e.g. look at the original CTC paper. But I think the distill.pub article (https://distill.pub/2017/ctc/) is also good.
For studying HMMs, any speech recognition lecture should cover that. We teach that at RWTH Aachen University but I don't think there are public recordings. But probably you should find some other lectures online somewhere.
You also find a lot of tutorials for Kaldi: https://kaldi-asr.org/
Maybe check this book: https://www.microsoft.com/en-us/research/publication/automat...
The relation of CTC and HMM becomes intuitively clear once you get the concept of HMMs. Often in terms of speech recognition, it is all formulated as finite state automata (FSA) (or finite state transducer (FST), or weighted FST (WFST)), and the CTC FST just looks a bit different (simpler) than the traditional HMMs, but in all cases, you can think about having states with possible transitions.
This is all mostly about the modeling. The training is more different. For CTC, you often calculate the log prob of the full sequence over all possible alignments directly, while for HMMs, people often use a fixed alignment, and calculate framewise cross entropy.
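For illustration, here is a minimal PyTorch sketch (my own, with made-up shapes and labels, not from the comment) of that training difference: CTC marginalizes the sequence log-probability over all alignments, while the hybrid-HMM setup scores one fixed frame-level alignment with framewise cross entropy.

```python
import torch
import torch.nn.functional as F

T, B, V = 50, 1, 30                                    # frames, batch size, labels (index 0 = CTC blank)
log_probs = torch.randn(T, B, V).log_softmax(dim=-1)   # framewise label posteriors from some encoder

# CTC: only the label sequence is given; the forward algorithm sums over all alignments.
targets = torch.tensor([[5, 12, 7, 3]])
ctc_loss = F.ctc_loss(log_probs, targets,
                      input_lengths=torch.tensor([T]),
                      target_lengths=torch.tensor([4]),
                      blank=0)

# HMM-style: a fixed alignment (e.g. from a previous forced alignment) gives one label per frame,
# and training is framewise cross entropy on those targets.
fixed_alignment = torch.randint(0, V, (T,))
ce_loss = F.nll_loss(log_probs[:, 0, :], fixed_alignment)   # log-probs already applied, so NLL loss

print(ctc_loss.item(), ce_loss.item())
```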
I did some research on the relation of CTC training and HMM training: https://www-i6.informatik.rwth-aachen.de/publications/downlo...
-
[D] What's stopping you from working on speech and voice?
- https://github.com/kaldi-asr/kaldi
-
C++ for machine learning
Additionally, C++ may be used for extremely high levels of optimization even for cloud-based ML. Dlib and Kaldi are C++ libraries used as dependencies in Python codebases for computer vision and audio processing, for example. So if your application requires you to customize any functions similar to those libraries, then you'll need C++ knowhow.
-
The Advantages and Disadvantages of In-House Speech Recognition
Frameworks and toolkits like Kaldi, initially promoted by the research community but nowadays used by both researchers and industry practitioners, have lowered the barrier to entry in the development of automatic speech recognition systems. However, state-of-the-art methods need large speech data sets to achieve a usable system.
-
xbps-src to only cross compile 32-bit
Hello. I'm trying to package the openfst library ([here](https://github.com/void-linux/void-packages/pull/39015)), but a developer says the 32-bit builds must be cross compiled from 64-bit. I see xbps-src has a nocross option, but I don't see a way to only cross compile. What do you think I should do? I have currently limited the archs to 64-bit ones. Here's my issue with the developer's response: https://github.com/kaldi-asr/kaldi/issues/4808 Thank you.
-
Machine Learning with Unix Pipes
If you're interested in Unix-like software design and not yet familiar with the Kaldi toolkit, you definitely need to check it out: https://kaldi-asr.org
It extends the Unix design with archives, control lists and matrices, enabling really flexible Unix-like processing. For example, recognition of a dataset looks like this:
extract-wav scp:list.scp ark:- | compute-mfcc-feats ark:- ark:- | lattice-decoder-faster final.mdl HCLG.fst ark:- ark:- | lattice-rescore ark:- ark:'|gzip -c > lat.gzip'
Another example is gstreamer command line.
-
Lexicap: Lex Fridman Podcast Whisper Captions by Andrej Karpathy
No, speaker diarization is not part of Whisper. There are open source projects - such as Kaldi [1], but it's hard to get them running if you are not an area expert.
[1] https://kaldi-asr.org/
-
Is there a way to integrate a raspberry pi with a keyboard to do speech to text?
State-of-the-art ASR, like what you get on smartphones, has unfortunately high resource requirements. Some recent smartphone models are able to run ASR on-device, but more typically, ASR is done by sending audio to a web service. Check out the (currently experimental) Web SpeechRecognition API in a Chrome browser. Here is a demo of the API in action. For something open source, check out Kaldi ASR.
NeMo
-
[P] Making a TTS voice, HK-47 from Kotor using Tortoise (Ideally WaveRNN)
I haven't tested WaveRNN, but of the ones I know, the best open-source option is FastPitch. It's easy to use; here is the tutorial for voice cloning.
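For reference, a rough inference sketch with stock NeMo checkpoints (the `tts_en_fastpitch` / `tts_hifigan` names and the method calls follow NeMo's TTS examples and should be treated as assumptions; fine-tuning per the tutorial would swap in the cloned-voice checkpoint):

```python
import soundfile as sf
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

spec_generator = FastPitchModel.from_pretrained("tts_en_fastpitch")   # text -> mel spectrogram
vocoder = HifiGanModel.from_pretrained("tts_hifigan")                 # mel spectrogram -> waveform

tokens = spec_generator.parse("Statement: this is a test of the cloned voice.")
spectrogram = spec_generator.generate_spectrogram(tokens=tokens)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

# the English FastPitch checkpoint is trained at 22.05 kHz
sf.write("hk47_test.wav", audio.detach().cpu().numpy()[0], samplerate=22050)
```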
- [N] Huggingface/nvidia release open source GPT-2B trained on 1.1T tokens
- [D] What is the best open source text to speech model?
-
[D] JAX vs PyTorch in 2023
Nowadays... bigger repos like https://github.com/NVIDIA/NeMo are all PyTorch, and lots of the work published by Meta and Microsoft is all Torch as well. I check new work on GitHub all the time and I haven't seen a TensorFlow repo in years, except one.
-
[D] What's stopping you from working on speech and voice?
- https://github.com/NVIDIA/NeMo
-
Can I use PyTorch to build a fast capitalization recoverer?
Can’t you use the NeMo model and just strip the punctuation from the output again if you don’t want it? You can also fine-tune the model with capitalization only if you look at the examples: https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Punctuation_and_Capitalization.ipynb The capitalization and punctuation are annotated separately (U indicates that the word should be upper-cased, and O means no capitalization). The model seems to be a token-level classifier, not seq-to-seq, so there should also be a way to get just the capitalization part, but you would have to look into the model as it's not shown in the examples.
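Below is a rough sketch of the "strip the punctuation again" idea; the `punctuation_en_bert` checkpoint name and the `add_punctuation_capitalization()` call follow NeMo's tutorial, but treat the exact names as assumptions:

```python
import re
from nemo.collections.nlp.models import PunctuationCapitalizationModel

# pretrained joint punctuation + capitalization model from the NeMo NLP collection
model = PunctuationCapitalizationModel.from_pretrained("punctuation_en_bert")

queries = ["hello my name is john and i live in new york"]
restored = model.add_punctuation_capitalization(queries)   # punctuated and capitalized text

# keep the capitalization, drop the punctuation marks the model inserted
capitalized_only = [re.sub(r"[.,?]", "", text) for text in restored]
print(capitalized_only)
```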
-
I made a free transcription service powered by Whisper AI
I think there's been talk to do speaker diarization with whisper-asr-webservice[0] which is also written in python and should be able to make use of goodies such as pyannote-audio, py-webrtcvad, etc.
Whisper is great but at the point we get to kludging various things together it starts to make more sense to use something like Nvidia NeMo[1] which was built with all of this in mind and more
[0] - https://github.com/ahmetoner/whisper-asr-webservice
[1] - https://github.com/NVIDIA/NeMo
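For illustration, a rough sketch (my own, not from the comment) of gluing Whisper segments to pyannote-audio speaker turns; the `pyannote/speaker-diarization` pipeline name follows pyannote's docs and may require a Hugging Face access token:

```python
import whisper
from pyannote.audio import Pipeline

asr = whisper.load_model("base")
diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization")

result = asr.transcribe("meeting.wav")     # ASR segments with start/end/text
speakers = diarizer("meeting.wav")         # speaker turns with start/end/label

# naive merge: attach to each ASR segment the speaker whose turn covers its midpoint
for seg in result["segments"]:
    mid = (seg["start"] + seg["end"]) / 2
    label = next((spk for turn, _, spk in speakers.itertracks(yield_label=True)
                  if turn.start <= mid <= turn.end), "unknown")
    print(f"[{label}] {seg['text'].strip()}")
```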
-
Mozilla Common Voice - Korean Language is live - Help Build a Korean Corpus for Training AI/Navi/etc
[Common Voice Email](mailto:[email protected]) || Common Voice || Korean Language Homepage || FAQs || Speaking Aloud and Reviewing Recordings || Sentence Collector || NVidia/NeMo
- Whisper – open source speech recognition by OpenAI
-
Using Edge Biometrics For Better AI Security System Development
The final layer of security was added with speech-to-text anti-spoofing built on QuartzNet from the NeMo framework. This model provides a decent quality user experience and is suitable for real-time scenarios. Measuring how close what the person says is to what the system expects requires calculating the Levenshtein distance between them.
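For illustration, a small self-contained sketch (my own, not from the article) of that distance computation in Python:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete, substitute) turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (free if chars match)
        prev = curr
    return prev[len(b)]

# compare what was recognized against the expected pass-phrase
print(levenshtein("open the door", "open the doors"))  # 1
```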
What are some alternatives?
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
pyannote-audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
speech-and-text-unity-ios-android - Speech to text in Unity iOS using Native Speech Recognition
espnet - End-to-End Speech Processing Toolkit
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
rhasspy - Offline private voice assistant for many human languages
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production