| | mycroft-core | Porcupine |
|---|---|---|
| Mentions | 212 | 31 |
| Stars | 6,456 | 3,452 |
| Growth | 0.3% | 1.2% |
| Activity | 0.0 | 9.0 |
| Latest commit | 11 days ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mycroft-core
-
Rabbit R1, Designed by Teenage Engineering
It's indeed suspicious. You're sending your voice samples, your various services accounts, your location and more private data to some proprietary black box in some public cloud. Sorry, but this is a privacy nightmare. It should be open source and self-hosted like Mycroft (https://mycroft.ai) or Leon (https://getleon.ai) to be trustworthy.
-
Finally! Kernel 6.6.6 has been released
Shouldn't this be Mycroft on this sub?
-
Mycroft
I was expecting this to be about Mycroft the AI assistant ( https://mycroft.ai/ ).
- Ask HN: Is there any open source/open hardware Echo Dot alike?
-
Coral TPU Dev Board for speech-to-text and nvidia agx as host running LLaMA??
I would recommend writing some proper glue logic in Python and using the socket functions for communication. But if you really want to get rid of Alexa, it's probably worth setting up mycroft.ai or another open-source assistant.
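The "glue logic over sockets" idea from that comment can be sketched with Python's standard library alone. This is a minimal, hypothetical example (the command and reply format are invented for illustration): one end sends a transcribed voice command over a socket, the other acknowledges it.

```python
import socket
import threading

def handle_client(conn):
    # Read one newline-terminated command and send back an acknowledgement.
    command = conn.makefile("r", encoding="utf-8").readline().strip()
    conn.sendall(f"OK: {command}\n".encode("utf-8"))
    conn.close()

def demo():
    # socketpair() stands in for a real client/server connection,
    # so the sketch runs without opening a network port.
    client, server = socket.socketpair()
    worker = threading.Thread(target=handle_client, args=(server,))
    worker.start()
    client.sendall(b"turn on the lights\n")
    reply = client.makefile("r", encoding="utf-8").readline().strip()
    worker.join()
    client.close()
    return reply
```

In a real setup you would replace `socketpair()` with a listening `socket.create_server` on one device and `socket.create_connection` on the other.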
-
Matter hasn't revolutionized the smart home yet, but AI may be about to change that - the TechRadar article claims most people don't have smart homes, just connected homes.
https://mycroft.ai/ is a sophisticated open source replacement for Siri/Alexa … you can buy their premade hardware version for $399
-
Local AI -- A semi-reliable copy of human knowledge that can live in a box in your kitchen
To add home automation, consider something like Mycroft (https://mycroft.ai/)
- Using LLaMA as a "real personal assistant"?
-
Show HN: Willow – Open-Source Privacy-Focused Voice Assistant Hardware
This project reminds me of Mycroft https://github.com/MycroftAI/mycroft-core.
-
Is Voice AI safe?
TL;DR: either way it depends, but if it's free, your data is probably the real product. If you don't want to be data mined, check out https://mycroft.ai
Porcupine
-
I made a ChatGPT virtual assistant that you can talk to
I call it DaVinci. DaVinci uses Picovoice (https://picovoice.ai/) solutions for wake word and voice activity detection and for converting speech to text, Amazon Polly to convert its responses into a natural sounding voice, and OpenAI’s GPT 3.5 to do the heavy lifting. It’s all contained in about 300 lines of Python code.
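The pipeline the author describes can be sketched as a single function where each external service is passed in as a callable. All names here are hypothetical stand-ins: in DaVinci itself, `detect_wake` and `transcribe` would be Picovoice, `generate` would be GPT-3.5, and `synthesize` would be Amazon Polly.

```python
def assistant_turn(audio, detect_wake, transcribe, generate, synthesize):
    # Stay idle until the wake word fires on the incoming audio.
    if not detect_wake(audio):
        return None
    text = transcribe(audio)      # speech-to-text
    reply = generate(text)        # LLM produces the response text
    return synthesize(reply)      # text-to-speech audio for playback
```

Structuring the loop this way keeps each vendor behind a plain function boundary, which is part of why the whole project fits in roughly 300 lines.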
-
Speech Recognition in Unity: Adding Voice Input
Download pre-trained models: "Porcupine" from the Porcupine Wake Word repository and the Video Player context from the Rhino Speech-to-Intent repository. You can also train custom models on Picovoice Console.
-
Speech Recognition with SwiftUI
Below are some useful resources:
- Open-source code
- Picovoice Platform SDK
- Picovoice website
-
Speech Recognition with Angular
Download the Porcupine model and turn the binary model into a base64 string.
-
OK Google, Add Hotword Detection to Chrome
Download Porcupine (i.e., the deep neural network model). Run the following to turn the binary model into a base64 string, from the project folder.
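The binary-to-base64 step above can be done with Python's standard library; this hypothetical helper (the function name and file path are illustrative, not from the post) reads a binary model file and emits a base64 string suitable for embedding in JavaScript source.

```python
import base64

def model_to_base64(path):
    # Read the binary model file and encode it as an ASCII base64 string.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```

The same result can be had from the shell with `base64 model.ppn` on most Unix systems.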
-
Hotword Detection for MCUs
The Porcupine SDK is on GitHub. Find libraries for supported MCUs in the Porcupine GitHub repository. Arduino libraries are available via the package manager offered by Arduino.
-
Day 12: Always Listening Voice Commands with React.js
Looking for more? Explore other languages on the Picovoice Console and check out the fully working demos with Porcupine on GitHub.
-
Day 6: Making Cool Raspberry Pi Projects even Cooler with Voice AI (1/4)
Don't forget to visit the Porcupine Wake Word GitHub repository to see the Python demos. If you want to do something similar to the video above, find the open-source code here
- Voice Assistant app in Haskell
-
What does "end-to-end" mean?
I sometimes see the term "end-to-end", and it always passes right by my ears as marketing jargon. For example, there was a recent post today that linked to this page: https://picovoice.ai/, and you'll find the statement "... end-to-end platform for adding voice to anything on your terms". I did a quick Google search and it seems like the term is used in many different contexts (e.g., encryption, enterprise software for product development, etc.), but to be honest, I'm just not getting it. Maybe someone can explain here within the realm of embedded software? Could you provide some examples as well?
What are some alternatives?
rhasspy - Offline private voice assistant for many human languages
snowboy - Future versions with model training module will be maintained through a forked version here: https://github.com/seasalt-ai/snowboy
Leon - 🧠 Leon is your open-source personal assistant.
mycroft-precise - A lightweight, simple-to-use, RNN wake word listener
kalliope - Kalliope is a framework that will help you to create your own personal assistant.
Caffe - Caffe: a fast open framework for deep learning.
jasper-client - Client code for Jasper voice computing platform
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
jarvis - Jarvis is a simple IA for home automation with (multi-languages) voice commands written in Python.
mxnet - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
J.A.R.V.I.S-project - A decent attempt to recreate J.A.R.V.I.S. from MCU's Iron Man, complete with machine learning (specifically, intent classification) [Moved to: https://github.com/Joe-Lyu/J.A.R.V.I.S-project]
Caffe2