Porcupine vs DeepSpeech

Compare Porcupine vs DeepSpeech and see what their differences are.

Porcupine

Porcupine is an on-device wake word detection engine powered by deep learning. (by Picovoice)

DeepSpeech

DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers. (by mozilla)
                Porcupine            DeepSpeech
Mentions        31                   68
Stars           4,088                26,309
Growth          2.0%                 0.7%
Activity        8.9                  0.0
Latest commit   7 days ago           8 months ago
Language        Python               C++
License         Apache License 2.0   Mozilla Public License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Porcupine

Posts with mentions or reviews of Porcupine. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-16.
  • I made a ChatGPT virtual assistant that you can talk to
    1 project | /r/ArtificialInteligence | 5 Apr 2023
    I call it DaVinci. DaVinci uses Picovoice (https://picovoice.ai/) solutions for wake word and voice activity detection and for converting speech to text, Amazon Polly to convert its responses into a natural sounding voice, and OpenAI’s GPT 3.5 to do the heavy lifting. It’s all contained in about 300 lines of Python code.
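    The post doesn't include its source, but the wake-word half of such an assistant is small. Below is a minimal sketch using the pvporcupine and pvrecorder packages; the AccessKey, keyword, and downstream hand-off are placeholders, not DaVinci's actual code.
    ```python
    # Hedged sketch of the wake-word piece only (not the author's DaVinci code).
    # Assumes the pvporcupine and pvrecorder packages and a Picovoice AccessKey.
    import pvporcupine
    from pvrecorder import PvRecorder

    ACCESS_KEY = "YOUR_PICOVOICE_ACCESS_KEY"  # obtained from Picovoice Console

    porcupine = pvporcupine.create(access_key=ACCESS_KEY, keywords=["porcupine"])
    recorder = PvRecorder(frame_length=porcupine.frame_length)
    recorder.start()

    try:
        while True:
            pcm = recorder.read()             # one frame of 16-bit PCM samples
            if porcupine.process(pcm) >= 0:   # >= 0 means the wake word was heard
                print("Wake word detected")
                # hand off to speech-to-text, GPT-3.5, and Polly here
    finally:
        recorder.delete()
        porcupine.delete()
    ```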
  • Speech Recognition in Unity: Adding Voice Input
    3 projects | dev.to | 16 Feb 2023
    Download the pre-trained models: "Porcupine" from the Porcupine Wake Word repository and "Video Player Context" from the Rhino Speech-to-Intent repository. You can also train custom models on Picovoice Console.
  • Speech Recognition with SwiftUI
    5 projects | dev.to | 13 Feb 2023
    Below are some useful resources: the open-source code, the Picovoice Platform SDK, and the Picovoice website.
  • Speech Recognition with Angular
    1 project | dev.to | 8 Feb 2023
    Download the Porcupine model and turn the binary model into a base64 string.
  • OK Google, Add Hotword Detection to Chrome
    1 project | dev.to | 3 Feb 2023
    Download Porcupine (i.e., the deep neural network). From the project folder, run the following to turn the binary model into a base64 string.
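    The post's exact command isn't reproduced here; as a rough stand-in, the conversion can be done with a short Python snippet using only the standard library (the file names below are placeholders):
    ```python
    # Hedged sketch: encode a binary Porcupine model as a base64 string.
    # File names are placeholders, not the exact ones from the post.
    import base64

    with open("porcupine_params.pv", "rb") as f:
        model_b64 = base64.b64encode(f.read()).decode("ascii")

    with open("porcupine_model.js", "w") as f:   # embed the string for a web app
        f.write(f'export const modelBase64 = "{model_b64}";\n')
    ```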
  • Hotword Detection for MCUs
    1 project | dev.to | 31 Jan 2023
    The Porcupine SDK is on GitHub. Find libraries for supported MCUs in the Porcupine GitHub repository. Arduino libraries are available through Arduino's own package manager.
  • Day 12: Always Listening Voice Commands with React.js
    1 project | dev.to | 17 Jan 2023
    Looking for more? Explore other languages on the Picovoice Console and check out the fully working demos for Porcupine on GitHub.
  • Day 6: Making Cool Raspberry Pi Projects even Cooler with Voice AI (1/4)
    1 project | dev.to | 9 Jan 2023
    Don't forget to visit the Porcupine Wake Word GitHub repository to see the Python demos. If you want to do something similar to the video above, you can find the open-source code here.
  • Voice Assistant app in Haskell
    8 projects | /r/haskell | 3 Jan 2023
  • What does "end-to-end" mean?
    1 project | /r/embedded | 17 Dec 2022
    I sometimes see the term "end-to-end", and it always passes right by my ears as marketing jargon. For example, there was a recent post today that linked to this page: https://picovoice.ai/, and you'll find the statement "... end-to-end platform for adding voice to anything on your terms". I did a quick Google search and it seems like the term is used in many different contexts (e.g., encryption, enterprise software for product development, etc.), but to be honest, I'm just not getting it. Maybe someone can explain here within the realm of embedded software? Could you provide some examples as well?

DeepSpeech

Posts with mentions or reviews of DeepSpeech. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-01.
  • ESpeak-ng: speech synthesizer with more than one hundred languages and accents
    21 projects | news.ycombinator.com | 1 May 2024
    As I understand it DeepSpeech is no longer actively maintained by Mozilla: https://github.com/mozilla/DeepSpeech/issues/3693

    For Text To Speech, I've found Piper TTS useful (for situations where "quality"=="realistic"/"natural"): https://github.com/rhasspy/piper

    For Speech to Text (which AIUI DeepSpeech provided), I've had some success with Vosk: https://github.com/alphacep/vosk-api
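    Since Vosk is suggested here as a DeepSpeech replacement for speech-to-text, here is a minimal sketch of its Python API, assuming the vosk pip package and a downloaded model directory (the model name and WAV file below are placeholders):
    ```python
    # Hedged sketch of offline speech-to-text with Vosk.
    # Assumes a model directory downloaded from the Vosk models page
    # and a 16 kHz, mono, 16-bit PCM WAV file.
    import json
    import wave

    from vosk import Model, KaldiRecognizer

    wf = wave.open("audio.wav", "rb")
    model = Model("vosk-model-small-en-us-0.15")   # placeholder model name
    rec = KaldiRecognizer(model, wf.getframerate())

    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        if rec.AcceptWaveform(data):                # a full utterance was decoded
            print(json.loads(rec.Result())["text"])

    print(json.loads(rec.FinalResult())["text"])    # flush the last partial result
    ```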

  • Common Voice
    5 projects | news.ycombinator.com | 5 Dec 2023
  • Ask HN: Speech to text models, are they usable yet?
    2 projects | news.ycombinator.com | 22 Oct 2023
  • Looking to recreate a cool AI assistant project with free tools
    3 projects | /r/selfhosted | 2 Aug 2023
    - [DeepSpeech](https://github.com/mozilla/DeepSpeech) rather than Whisper for offline speech-to-text
    3 projects | /r/techsupport | 2 Aug 2023
    I came across a very interesting project made by Mckay Wrigley (shared on Twitter/X: "My goal is to (hopefully!) add my house to the dataset over time so that I have an indoor assistant with knowledge of my surroundings. It's basically just a slow process of building a good enough dataset. I hacked this together for 2 reasons: 1) It was fun, and I wanted to…") and I was wondering what's the easiest way to implement it using free, open-source software. Here's what he used originally, followed by some open-source candidates I'm considering, but I would love feedback and advice before starting:
    Original tools:
    - YoloV8 does the heavy lifting with the object detection
    - OpenAI Whisper handles voice
    - GPT-4 handles the "AI"
    - Google Custom Search Engine handles web browsing
    - MacOS/iOS handles streaming the video from my iPhone to my Mac
    - Python for the rest
    Open-source alternatives:
    - [OpenCV](https://opencv.org/) instead of YoloV8 for computer vision and object detection
    - Replacing GPT-4 is still a challenge; I know there are some good open-source LLMs like Llama 2, but I don't know how to apply this in the code, perhaps in the form of an API
    - [DeepSpeech](https://github.com/mozilla/DeepSpeech) rather than Whisper for offline speech-to-text
    - [Coqui TTS](https://github.com/coqui-ai/TTS) instead of Whisper for text-to-speech
    - Browser automation with [Selenium](https://www.selenium.dev/) instead of Google Custom Search
    - Stream video from phone via RTSP instead of iOS integration
    - Python for the rest of the code
    I'm new to working with tools like OpenCV, DeepSpeech, etc., so I would love any advice on the best way to replicate the original project in an open-source way before I dive in. Are there any good guides or better resources out there? What are some pitfalls to avoid? Any help is much appreciated!
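    Since DeepSpeech is the offline speech-to-text candidate in this post, a minimal sketch of how it is typically driven from Python is shown below. It assumes the deepspeech pip package and the model/scorer files from the 0.9.3 release; the audio file name is a placeholder for a 16 kHz, mono, 16-bit PCM recording.
    ```python
    # Hedged sketch: offline transcription with the deepspeech package.
    # Model/scorer file names are assumed from the 0.9.3 release; "audio.wav"
    # is a placeholder for a 16 kHz, mono, 16-bit PCM recording.
    import wave

    import numpy as np
    import deepspeech

    model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")
    model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

    with wave.open("audio.wav", "rb") as wf:
        audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

    print(model.stt(audio))  # prints the transcript as plain text
    ```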
  • Speech-to-Text in Real Time
    1 project | news.ycombinator.com | 16 Jul 2023
  • Linux Mint XFCE
    1 project | /r/linuxbrasil | 29 Apr 2023
    Something like this? https://github.com/mozilla/DeepSpeech
  • Are there any secure and free auto transcription software ?
    2 projects | /r/software | 19 Apr 2023
    If you're not afraid to get a little technical, you could take a look at mozilla/DeepSpeech (installation & usage docs here).
  • Web Speech API is (still) broken on Linux circa 2023
    8 projects | /r/javascript | 15 Apr 2023
    There is a lot of TTS and SST development going on (https://github.com/mozilla/TTS; https://github.com/mozilla/DeepSpeech; https://github.com/common-voice/common-voice). That is the only way they work: Contributions from the wild.
  • Deepspeech /common voice.
    1 project | /r/mozilla | 14 Apr 2023

What are some alternatives?

When comparing Porcupine and DeepSpeech, you can also consider the following projects:

RandomUserSwift - 👤 Framework to Generate Random Users - An Unofficial Swift SDK for randomuser.me

Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.

rides-ios-sdk - Uber iOS SDK (beta)

dicio-android - Dicio assistant app for Android

mxnet - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

PaddleSpeech - Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.

