Kaldi Speech Recognition Toolkit vs DeepSpeech

Compare Kaldi Speech Recognition Toolkit vs DeepSpeech and see what their differences are.

DeepSpeech

DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers. (by mozilla)
Kaldi Speech Recognition Toolkit vs DeepSpeech
  • Mentions: 22 vs 67
  • Stars: 13,685 vs 24,164
  • Growth: 1.1% vs 1.0%
  • Activity: 7.4 vs 0.0
  • Latest commit: 3 months ago vs 2 months ago
  • Primary language: Shell vs C++
  • License: GNU General Public License v3.0 or later vs Mozilla Public License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we track.

Kaldi Speech Recognition Toolkit

Posts with mentions or reviews of Kaldi Speech Recognition Toolkit. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-03.
  • Amazon plans to charge for Alexa in June–unless internal conflict delays revamp
    1 project | news.ycombinator.com | 20 Jan 2024
    Yeah, whisper is the closest thing we have, but even it requires more processing power than is present in most of these edge devices in order to feel smooth. I've started a voice interface project on a Raspberry Pi 4, and it takes about 3 seconds to produce a result. That's impressive, but not fast enough for Alexa.

    From what I gather a Pi 5 can do it in 1.5 seconds, which is closer, so I suspect it's only a matter of time before we do have fully local STT running directly on speakers.

    > Probably anathema to the space, but if the devices leaned into the ~five tasks people use them for (timers, weather, todo list?) could probably tighten up the AI models to be more accurate and/or resource efficient.

    Yes, this is the approach taken by a lot of streaming STT systems, like Kaldi [0]. Rather than use a fully capable model, you train a specialized one that knows what kinds of things people are likely to say to it.

    [0] http://kaldi-asr.org/

  • Unsupervised (Semi-Supervised) ASR/STT training recipes
    2 projects | /r/deeplearning | 3 Nov 2023
  • Steve's Explanation of the Viterbi Algorithm
    1 project | news.ycombinator.com | 16 Oct 2023
    You can study CTC in isolation, ignoring all the HMM background. That is also how CTC was originally introduced: it mostly ignored the existing HMM literature. So, for example, look at the original CTC paper. But I think the distill.pub article (https://distill.pub/2017/ctc/) is also good.

    For studying HMMs, any speech recognition lecture should cover them. We teach this at RWTH Aachen University, but I don't think there are public recordings; you can probably find other lectures online, though.

    You also find a lot of tutorials for Kaldi: https://kaldi-asr.org/

    Maybe check this book: https://www.microsoft.com/en-us/research/publication/automat...

    The relation of CTC and HMMs becomes intuitively clear once you get the concept of HMMs. In speech recognition, this is often all formulated in terms of finite state automata (FSA), finite state transducers (FST), or weighted FSTs (WFST); the CTC FST just looks a bit different (simpler) than the traditional HMM topology, but in all cases you can think of states with possible transitions.

    This is all mostly about the modeling. The training differs more: for CTC, you usually calculate the log probability of the full label sequence over all possible alignments directly, while for HMMs, people often use a fixed alignment and calculate a framewise cross entropy.

    I did some research on the relation of CTC training and HMM training: https://www-i6.informatik.rwth-aachen.de/publications/downlo...
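
    As an illustration of the training difference sketched above, here is a minimal, hypothetical PyTorch snippet contrasting the two criteria (toy shapes and labels, not taken from any real recipe):

        # CTC loss (sum over all alignments) vs. framewise cross entropy (fixed alignment).
        # Purely illustrative; shapes and labels are made up.
        import torch
        import torch.nn.functional as F

        T, N, C = 50, 1, 20                        # frames, batch size, classes (index 0 = CTC blank)
        logits = torch.randn(T, N, C)
        log_probs = logits.log_softmax(-1)

        # CTC: log prob of the label sequence, summed over all possible alignments
        targets = torch.tensor([[3, 7, 7, 2]])
        ctc = F.ctc_loss(log_probs, targets,
                         input_lengths=torch.tensor([T]),
                         target_lengths=torch.tensor([4]), blank=0)

        # HMM-style training: one fixed state/label per frame, framewise cross entropy
        alignment = torch.randint(0, C, (T,))
        ce = F.cross_entropy(logits.squeeze(1), alignment)

        print(ctc.item(), ce.item())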

  • [D] What's stopping you from working on speech and voice?
    7 projects | /r/MachineLearning | 30 Jan 2023
    - https://github.com/kaldi-asr/kaldi
  • C++ for machine learning
    2 projects | /r/cscareerquestions | 7 Jan 2023
    Additionally, C++ may be used for extremely high levels of optimization, even for cloud-based ML. Dlib and Kaldi are C++ libraries used as dependencies in Python codebases for computer vision and audio processing, for example. So if your application requires you to customize any functions similar to those libraries, you'll need C++ know-how.
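
    To make the "C++ underneath a Python codebase" point concrete, here is a minimal sketch using dlib's Python bindings; the image path is a placeholder, and the actual detection work runs in dlib's compiled C++ core:

        # Hypothetical example: call dlib's C++ face detector through its Python bindings.
        import dlib

        detector = dlib.get_frontal_face_detector()   # HOG-based detector implemented in C++
        img = dlib.load_rgb_image("photo.jpg")        # placeholder input image
        faces = detector(img, 1)                      # upsample once, then detect
        print(f"Found {len(faces)} face(s)")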
  • The Advantages and Disadvantages of In-House Speech Recognition
    1 project | /r/datatangblogbotshare | 12 Dec 2022
    Frameworks and toolkits like Kaldi, which were initially promoted by the research community but are nowadays used by both researchers and industry practitioners, have lowered the barrier to entry for developing automatic speech recognition systems. Nonetheless, state-of-the-art methods need large speech datasets to achieve a usable system.
  • xbps-src to only cross compile 32-bit
    2 projects | /r/voidlinux | 21 Nov 2022
    Hello. I'm trying to package the openfst library [here](https://github.com/void-linux/void-packages/pull/39015), but a developer says 32-bit must be cross compiled from 64-bit. I see xbps-src has a nocross option, but I don't see a way to only cross compile. What do you think I should do? I have currently limited the archs to 64-bit ones. Here's my issue with the developer's response: https://github.com/kaldi-asr/kaldi/issues/4808 Thank you.
  • Machine Learning with Unix Pipes
    1 project | news.ycombinator.com | 15 Nov 2022
    If you are interested in Unix-like software design and are not yet familiar with the Kaldi toolkit, you should definitely check it out: https://kaldi-asr.org

    It extends the Unix design with archives, control lists, and matrices, enabling really flexible Unix-like processing. For example, recognition of a dataset looks like this:

    extract-wav scp:list.scp ark:- | compute-mfcc-feats ark:- ark:- | lattice-decoder-faster final.mdl HCLG.fst ark:- ark:- | lattice-rescore ark:- ark:'|gzip -c > lat.gzip'

    Another example of this style is the GStreamer command line.
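
    To give a feel for the .scp ("script") lists that pipelines like the one above consume, here is a small, hypothetical Python sketch that reads a manifest of "utterance-id path" lines and reports each recording's duration (file names are placeholders):

        # Parse a Kaldi-style .scp manifest and print the duration of each WAV entry.
        # Illustrative only; real Kaldi tools read these lists natively.
        import wave

        with open("list.scp") as f:
            entries = [line.split(maxsplit=1) for line in f if line.strip()]

        for utt_id, wav_path in entries:
            with wave.open(wav_path.strip(), "rb") as wf:
                duration = wf.getnframes() / wf.getframerate()
            print(f"{utt_id}\t{duration:.2f}s")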

  • Lexicap: Lex Fridman Podcast Whisper Captions by Andrej Karpathy
    1 project | news.ycombinator.com | 27 Sep 2022
    No, speaker diarization is not part of Whisper. There are open source projects for it, such as Kaldi [1], but they're hard to get running if you are not an expert in the area.

    [1] https://kaldi-asr.org/

  • Is there a way to integrate a raspberry pi with a keyboard to do speech to text?
    2 projects | /r/ErgoMechKeyboards | 1 Sep 2022
    State-of-the-art ASR, like what you get on smartphones, has unfortunately high resource requirements. Some recent smartphone models are able to run ASR on-device, but more typically, ASR is done by sending audio to a web service. Check out the (currently experimental) Web SpeechRecognition API in a Chrome browser. Here is a demo of the API in action. For something open source, check out Kaldi ASR.
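
    For a lightweight, on-device starting point, the Kaldi-based vosk-api project (listed under the alternatives below) exposes a simple Python interface; a minimal sketch, assuming a downloaded model directory and a 16 kHz mono WAV file (both placeholders):

        # Offline transcription sketch with the Vosk (Kaldi-based) Python bindings.
        # "model" and the WAV path are placeholders for real files.
        import json
        import wave
        from vosk import Model, KaldiRecognizer

        wf = wave.open("speech_16k_mono.wav", "rb")
        rec = KaldiRecognizer(Model("model"), wf.getframerate())

        while True:
            data = wf.readframes(4000)
            if not data:
                break
            rec.AcceptWaveform(data)

        print(json.loads(rec.FinalResult())["text"])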

DeepSpeech

Posts with mentions or reviews of DeepSpeech. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-05.
  • Common Voice
    5 projects | news.ycombinator.com | 5 Dec 2023
  • Ask HN: Speech to text models, are they usable yet?
    2 projects | news.ycombinator.com | 22 Oct 2023
  • Looking to recreate a cool AI assistant project with free tools
    3 projects | /r/selfhosted | 2 Aug 2023
    - [DeepSpeech](https://github.com/mozilla/DeepSpeech) rather than Whisper for offline speech-to-text
    3 projects | /r/techsupport | 2 Aug 2023
    I came across a very interesting project made by Mckay Wrigley (shared on Twitter/X: "My goal is to (hopefully!) add my house to the dataset over time so that I have an indoor assistant with knowledge of my surroundings. It's basically just a slow process of building a good enough dataset. I hacked this together for 2 reasons: 1) It was fun, and I wanted to…") and I was wondering what's the easiest way to implement it using free, open-source software. Here's what he used originally, followed by some open-source candidates I'm considering; I would love feedback and advice before starting.

    Original tools:
    - YoloV8 does the heavy lifting with the object detection
    - OpenAI Whisper handles voice
    - GPT-4 handles the "AI"
    - Google Custom Search Engine handles web browsing
    - MacOS/iOS handles streaming the video from my iPhone to my Mac
    - Python for the rest

    Open-source alternatives:
    - [OpenCV](https://opencv.org/) instead of YoloV8 for computer vision and object detection
    - Replacing GPT-4 is still a challenge; I know there are some good open-source LLMs like Llama 2, but I don't know how to apply this in the code, perhaps in the form of an API
    - [DeepSpeech](https://github.com/mozilla/DeepSpeech) rather than Whisper for offline speech-to-text
    - [Coqui TTS](https://github.com/coqui-ai/TTS) instead of Whisper for text-to-speech
    - Browser automation with [Selenium](https://www.selenium.dev/) instead of Google Custom Search
    - Stream video from phone via RTSP instead of iOS integration
    - Python for the rest of the code

    I'm new to working with tools like OpenCV, DeepSpeech, etc., so I would love any advice on the best way to replicate the original project in an open-source way before I dive in. Are there any good guides or better resources out there? What are some pitfalls to avoid? Any help is much appreciated!
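
    Since DeepSpeech keeps coming up here as the offline speech-to-text piece, here is a minimal, hypothetical inference sketch with its Python package (model, scorer, and audio file names are placeholders; DeepSpeech expects 16 kHz, 16-bit mono audio):

        # Offline STT sketch using the deepspeech Python package.
        # File names are placeholders for the released model artifacts.
        import wave
        import numpy as np
        import deepspeech

        model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")
        model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

        with wave.open("audio_16k_mono.wav", "rb") as wf:
            audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

        print(model.stt(audio))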
  • Speech-to-Text in Real Time
    1 project | news.ycombinator.com | 16 Jul 2023
  • Linux Mint XFCE
    1 project | /r/linuxbrasil | 29 Apr 2023
    algo assim? https://github.com/mozilla/DeepSpeech
  • Are there any secure and free auto transcription software ?
    2 projects | /r/software | 19 Apr 2023
    If you're not afraid to get a little technical, you could take a look at mozilla/DeepSpeech (installation & usage docs here).
  • Web Speech API is (still) broken on Linux circa 2023
    8 projects | /r/javascript | 15 Apr 2023
    There is a lot of TTS and SST development going on (https://github.com/mozilla/TTS; https://github.com/mozilla/DeepSpeech; https://github.com/common-voice/common-voice). That is the only way they work: Contributions from the wild.
  • Deepspeech /common voice.
    1 project | /r/mozilla | 14 Apr 2023
  • Mozilla Launches Responsible AI Challenge
    2 projects | news.ycombinator.com | 15 Mar 2023
    Mozilla did release DeepSpeech[0] and Firefox Translations[1], the latter of which they included in Firefox to offer client-side webpage translations.

    They definitely have fewer resources than OpenAI, and they do not produce SOTA research (their publications have plummeted to about one per year anyway[2]). So the only way for them to make progress is to seek government grants or run challenges like this one.

    This challenge is unlikely to be profitable for the winning team: the expected value of the winnings is probably around $1K once you account for the probability that another team ranks higher, but ML research projects often cost more than that (recently, Alpaca spent upwards of $600 on computation alone, and pretraining large models is of course far more expensive). So the main gain will be publicity.

    [0]: https://github.com/mozilla/deepspeech

    [1]: https://github.com/mozilla/firefox-translations/

    [2]: https://research.mozilla.org/

What are some alternatives?

When comparing Kaldi Speech Recognition Toolkit and DeepSpeech you can also consider the following projects:

vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node

NeMo - a framework for generative AI

pyannote-audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding

picovoice - On-device voice assistant platform powered by deep learning

speech-and-text-unity-ios-android - Speech to text in Unity for iOS using native speech recognition

STT - 🐸STT - The deep learning toolkit for Speech-to-Text. Training and deploying STT models has never been so easy.

espnet - End-to-End Speech Processing Toolkit

TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

rhasspy - Offline private voice assistant for many human languages

PaddleSpeech - Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.

bert-for-inference - A small repo showing how to easily use BERT (or other transformers) for inference

dicio-android - Dicio assistant app for Android