speech-and-text-unity-ios-android
Kaldi Speech Recognition Toolkit
| | speech-and-text-unity-ios-android | Kaldi Speech Recognition Toolkit |
| --- | --- | --- |
| Mentions | 1 | 24 |
| Stars | 304 | 14,796 |
| Growth | 0.0% | 1.4% |
| Activity | 0.0 | 7.0 |
| Latest Commit | about 1 year ago | 3 months ago |
| Language | C# | Shell |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
speech-and-text-unity-ios-android
-
Are there free AI text to speech resources?
PingAK9/Speech-And-Text-Unity-iOS-Android: Speech to text in Unity iOS using Native Speech Recognition (github.com)
Kaldi Speech Recognition Toolkit
-
Cloud Solutions vs. On-Premise Speech Recognition Systems
Use of Open-Source Solutions and Customizable Models. On-premise systems, such as Lingvanex and Kaldi, provide tools to develop speech recognition models from scratch or based on open-source libraries. Unlike cloud services, where developers are limited to pre-built models, on-premise solutions allow you to create a system that fully matches the specifics of the task. For example, models can be trained on specific datasets, including professional vocabulary, dialects, or phrases typical of certain fields (e.g., healthcare or law).
-
Tour of Hell
Let me introduce you to Kaldi, a speech-to-text engine. I put the link right to the models directory, to save you some time.
https://github.com/kaldi-asr/kaldi/tree/master/egs
There are a bunch of shell, Perl and Python scripts there, with some awk in between. These scripts are often copied almost verbatim between models and this, believe me, can lead to all sorts of errors.
The running joke around working with these scripts was "all these .sh should be .hs," i.e., these scripts should be implemented in Haskell.
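For a rough sense of what those recipe scripts look like, here is a heavily abridged sketch in the style of a typical egs run.sh; the stage numbers, directory names, and option values are illustrative assumptions, not taken from any particular recipe.

```sh
#!/usr/bin/env bash
# Abridged sketch of a Kaldi egs-style run.sh (illustrative only).
. ./cmd.sh      # defines $train_cmd / $decode_cmd for the cluster or local machine
. ./path.sh     # puts the Kaldi binaries on PATH

stage=0
. utils/parse_options.sh

if [ $stage -le 0 ]; then
  # Feature extraction: MFCCs plus per-speaker cepstral mean/variance stats.
  steps/make_mfcc.sh --nj 4 --cmd "$train_cmd" data/train
  steps/compute_cmvn_stats.sh data/train
fi

if [ $stage -le 1 ]; then
  # Monophone training, the usual first step before triphone/chain models.
  # Assumes data/lang has already been prepared.
  steps/train_mono.sh --nj 4 --cmd "$train_cmd" data/train data/lang exp/mono
fi
```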
-
Amazon plans to charge for Alexa in June–unless internal conflict delays revamp
Yeah, whisper is the closest thing we have, but even it requires more processing power than is present in most of these edge devices in order to feel smooth. I've started a voice interface project on a Raspberry Pi 4, and it takes about 3 seconds to produce a result. That's impressive, but not fast enough for Alexa.
From what I gather a Pi 5 can do it in 1.5 seconds, which is closer, so I suspect it's only a matter of time before we do have fully local STT running directly on speakers.
> Probably anathema to the space, but if the devices leaned into the ~five tasks people use them for (timers, weather, todo list?) could probably tighten up the AI models to be more accurate and/or resource efficient.
Yes, this is the approach taken by a lot of streaming STT systems, like Kaldi [0]. Rather than use a fully capable model, you train a specialized one that knows what kinds of things people are likely to say to it.
[0] http://kaldi-asr.org/
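As a sketch of what "a specialized model that knows what people are likely to say" can mean in a Kaldi-style pipeline, one common approach is to train a tiny n-gram language model on the expected commands and compile it into the grammar FST. The file names below and the use of SRILM's ngram-count are assumptions for illustration, not from the comment above.

```sh
# Hypothetical list of the utterances the device should expect (illustrative).
cat > commands.txt <<'EOF'
set a timer for five minutes
what is the weather today
add milk to my shopping list
EOF

# Train a tiny bigram LM with SRILM's ngram-count (assumed installed);
# Witten-Bell discounting copes with very small corpora.
ngram-count -order 2 -wbdiscount -text commands.txt -lm commands.arpa

# Compile the ARPA LM into a grammar FST (G.fst), assuming a words.txt symbol
# table from an existing Kaldi lang directory that covers these words.
arpa2fst --disambig-symbol='#0' --read-symbol-table=words.txt commands.arpa G.fst
```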
- Unsupervised (Semi-Supervised) ASR/STT training recipes
-
Steve's Explanation of the Viterbi Algorithm
You can study CTC in isolation, ignoring all the HMM background. That is how CTC was also originally introduced, by mostly ignoring any of the existing HMM literature. So e.g. look at the original CTC paper. But I think the distill.pub article (https://distill.pub/2017/ctc/) is also good.
For studying HMMs, any speech recognition lecture should cover them. We teach this at RWTH Aachen University, but I don't think there are public recordings; you can probably find other lectures online somewhere.
You can also find a lot of tutorials for Kaldi: https://kaldi-asr.org/
Maybe check this book: https://www.microsoft.com/en-us/research/publication/automat...
The relation of CTC and HMM becomes intuitively clear once you get the concept of HMMs. Often in terms of speech recognition, it is all formulated as finite state automata (FSA) (or finite state transducer (FST), or weighted FST (WFST)), and the CTC FST just looks a bit different (simpler) than the traditional HMMs, but in all cases, you can think about having states with possible transitions.
This is all mostly about the modeling. The training differs more: for CTC, you often compute the log-probability of the full sequence by summing over all possible alignments directly, while for HMMs, people often use a fixed alignment and compute a framewise cross-entropy.
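In standard notation (my summary, not taken from the comment above), the two training criteria can be written as follows:

```latex
% CTC / full-sum training: marginalize over all alignments A = (a_1, ..., a_T)
% that collapse (via the label/blank collapsing map B) to the target sequence y.
L_{\mathrm{CTC}} = -\log \sum_{A \in \mathcal{B}^{-1}(y)} \prod_{t=1}^{T} p(a_t \mid x)

% Framewise cross-entropy with one fixed alignment \hat{A} = (\hat{a}_1, ..., \hat{a}_T),
% as typically used for (hybrid) HMM training.
L_{\mathrm{CE}} = -\sum_{t=1}^{T} \log p(\hat{a}_t \mid x)
```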
I did some research on the relation of CTC training and HMM training: https://www-i6.informatik.rwth-aachen.de/publications/downlo...
-
[D] What's stopping you from working on speech and voice?
- https://github.com/kaldi-asr/kaldi
-
C++ for machine learning
Additionally, C++ may be used for extremely high levels of optimization even for cloud-based ML. Dlib and Kaldi are C++ libraries used as dependencies in Python codebases for computer vision and audio processing, for example. So if your application requires you to customize any functions similar to those libraries, then you'll need C++ knowhow.
-
The Advantages and Disadvantages of In-House Speech Recognition
Frameworks and toolkits like Kaldi, initially promoted by the research community but nowadays used by both researchers and industry practitioners, have lowered the entry barrier for developing automatic speech recognition systems. Nonetheless, state-of-the-art methods need large speech datasets to achieve a usable system.
-
xbps-src to only cross compile 32-bit
Hello. I'm trying to package the openfst library [here](https://github.com/void-linux/void-packages/pull/39015), but a developer says 32-bit must be cross-compiled from 64-bit. I see xbps-src has a nocross option, but I don't see a way to only cross-compile. What do you think I should do? I have currently limited the archs to 64-bit ones. Here's my issue with the developer's response: https://github.com/kaldi-asr/kaldi/issues/4808 Thank you.
-
Machine Learning with Unix Pipes
If you're interested in Unix-like software design and are not yet familiar with the Kaldi toolkit, you definitely need to check it out: https://kaldi-asr.org
It extends the Unix design with archives, control lists, and matrices, enabling really flexible Unix-like processing. For example, recognition of a dataset looks like this:
extract-wav scp:list.scp ark:- | compute-mfcc-feats ark:- ark:- | lattice-decoder-faster final.mdl HCLG.fst ark:- ark:- | lattice-rescore ark:- ark:'|gzip -c > lat.gzip'
Another example is the GStreamer command line.
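As a small, concrete illustration of how the scp/ark convention composes with ordinary pipes (a sketch assuming a compiled Kaldi and a wav.scp list of recordings, not taken from the comment above):

```sh
# Compute MFCC features directly from the wav.scp control list and pipe the
# resulting archive into copy-feats, which prints it as a readable text archive.
compute-mfcc-feats scp:wav.scp ark:- | copy-feats ark:- ark,t:- | head
```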
What are some alternatives?
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
annyang - 💬 Speech recognition for your site
pyannote-audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
unity-jar-resolver - Unity plugin which resolves Android & iOS dependencies and performs version management
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.