Porcupine vs nlp.js
| | Porcupine | nlp.js |
|---|---|---|
| Mentions | 31 | 9 |
| Stars | 3,384 | 6,042 |
| Growth | 2.0% | 1.0% |
| Activity | 9.1 | 3.8 |
| Latest commit | 7 days ago | 8 days ago |
| Language | Python | JavaScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Porcupine
- Speech Recognition in Unity: Adding Voice Input
Download pre-trained models: "Porcupine" from the Porcupine Wake Word repository and the Video Player context from the Rhino Speech-to-Intent repository. You can also train a custom model on Picovoice Console.
- Speech Recognition with SwiftUI
Below are some useful resources: the open-source code, the Picovoice Platform SDK, and the Picovoice website.
In order to initialize the voice AI, we’ll need both Porcupine (.ppn) and Rhino (.rhn) model files. Picovoice has made several pre-trained Porcupine and pre-trained Rhino models available on the Picovoice GitHub repositories. For this Barista app, we’re going to use the trigger phrase Hey Barista and the Coffee Maker context.
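The snippet above targets SwiftUI, but the same initialization pattern applies across Picovoice SDKs: pass an AccessKey, a Porcupine keyword file, and a Rhino context file, plus callbacks for the wake word and the inference. A minimal sketch with the Node SDK (the file names are illustrative, and `ACCESS_KEY` is assumed to come from Picovoice Console):

```javascript
// Sketch only: assumes @picovoice/picovoice-node is installed and
// ACCESS_KEY holds a valid Picovoice Console AccessKey.
const { Picovoice } = require("@picovoice/picovoice-node");

const picovoice = new Picovoice(
  ACCESS_KEY,
  "hey_barista.ppn", // Porcupine wake-word model (hypothetical file name)
  () => console.log("Wake word detected"),
  "coffee_maker.rhn", // Rhino context model (hypothetical file name)
  (inference) => {
    // Fires once Rhino finishes; intent/slots are only set when understood.
    if (inference.isUnderstood) {
      console.log(inference.intent, inference.slots);
    }
  }
);

// Feed 16 kHz, 16-bit mono PCM audio one frame at a time, e.g.:
// for (const frame of frames) picovoice.process(frame);
```

The two callbacks mirror the two model files: the `.ppn` drives wake-word detection, the `.rhn` drives speech-to-intent.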
- Voice Assistant app in Haskell
- Ask HN: Offline, Embeddable Speech Recognition?
- How to get high-quality, low-cost Speech-to-Text transcription?
- Researchers find Amazon uses Alexa voice data to target you with ads
- Offline voice recognition on RPi Pico
I asked them to support Pi Pico, but it seems my petition would need more support from the community.
I know about Picovoice! They support a Cortex-M4 Arduino board: the Nano 33 BLE Sense. But that board has been out of stock for over a year. We can do pretty cool stuff with it, like this.
- Is it possible to self host a voice assistant?
Consider looking at https://github.com/mozilla/DeepSpeech: pre-compiled versions are available, including builds that run on a Raspberry Pi, and yes, it's all local, though your mileage may vary. There is also https://picovoice.ai/, which runs everything locally on the machine, but again each uses a constrained local language model/syntax. The other real question, as https://www.reddit.com/user/eduncan911/ correctly states, is the wake word. Most systems process all sound, i.e. are listening all the time. Alexa, Google Assistant, and similar devices embed a smaller model, or use dedicated hardware/neural networks, to recognize the wake word before passing sound on for further processing. So think of most of these devices as always listening and processing, and you'd be right; factor that into power usage, etc.
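The always-listening pattern described above usually boils down to a tight loop that feeds short PCM frames to a small on-device detector, with heavier processing kicked off only after the wake word fires. A sketch with Porcupine's Node SDK (assumes `@picovoice/porcupine-node` is installed and `ACCESS_KEY` is a valid Picovoice AccessKey; the hand-off step is left as a stub):

```javascript
const { Porcupine, BuiltinKeyword } = require("@picovoice/porcupine-node");

// A small wake-word model runs continuously on-device;
// the second argument lists keywords, the third their sensitivities (0..1).
const porcupine = new Porcupine(ACCESS_KEY, [BuiltinKeyword.PORCUPINE], [0.5]);

// Called for each captured audio frame: an Int16Array of
// porcupine.frameLength samples of 16 kHz, 16-bit mono PCM.
function onAudioFrame(frame) {
  const keywordIndex = porcupine.process(frame); // -1 when nothing detected
  if (keywordIndex >= 0) {
    // Only now hand audio off to the full STT / speech-to-intent pipeline.
  }
}
```

This split is what keeps power usage manageable: the always-on part is a tiny model, and the expensive recognition only runs after detection.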
nlp.js
- Couple Uncomfortable Facts About AI As It Is Right Now
nlp.js/docs/v4/nlp-intent-logics.md at master · axa-group/nlp.js · GitHub
- [AskJS] Rate a string on how much sense it makes
For a JS based approach you could try NLP libraries like this one: https://github.com/axa-group/nlp.js
- I built a browser extension that helps boycott the World Cup in Qatar on German news websites.
- The full tech stack to run a chatbot — behind the scenes of an open source bot platform
To determine which chatbot intent best matches the user's textual input, we rely on nlp.js (in JS), though we are in the process of moving to our new Python NLP server to better optimize for the needs of eCommerce conversations. A preprocessor language model is also used to improve the chances of a match.
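The intent-matching step described above can be sketched with nlp.js directly: you register example utterances per intent, train, and then process free text to get the best-matching intent and a confidence score. A minimal sketch assuming the `node-nlp` package is installed (the intent names and utterances are made up for illustration):

```javascript
const { NlpManager } = require("node-nlp");

const manager = new NlpManager({ languages: ["en"] });

// Register a few training utterances per intent.
manager.addDocument("en", "where is my order", "order.status");
manager.addDocument("en", "track my package", "order.status");
manager.addDocument("en", "I want a refund", "order.refund");
manager.addAnswer("en", "order.status", "Let me check your order status.");

(async () => {
  await manager.train();
  const result = await manager.process("en", "can you track my order?");
  // result.intent is the best match, result.score its confidence (0..1).
  console.log(result.intent, result.score);
})();
```

Everything here runs locally in the Node process, which is also why nlp.js comes up in the on-premises discussions below.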
- How to build your own chatbot NLP engine
Probably not. In fact, with Xatkit we aim to be a chatbot orchestration platform precisely to avoid reinventing the wheel and the not-invented-here syndrome. So, in most cases, other existing platforms (like DialogFlow or nlp.js) will work just fine. But we have also realized that there are always some particularly tricky bots for which you really need to customize your engine to the specific chatbot semantics to get the results you want.
- On premises chatbot
Also, if security is so important, you may want to configure Xatkit to work with nlp.js (see our wiki for instructions) so that even the intent-matching part is done locally, without sending the input text to the cloud (as would happen if you decided to use, for instance, an NLP engine such as DialogFlow).
- Getting Rid of Dust / 1.0.0-beta.4
Since the previous release, NLP.js has shipped a lot of work and released a major version, moving from a monolithic library to multiple independent packages. So I spent some time making Leon's NLP compatible with the latest changes.
What are some alternatives?
natural - general natural language facilities for node
snowboy - Future versions with model training module will be maintained through a forked version here: https://github.com/seasalt-ai/snowboy
mycroft-precise - A lightweight, simple-to-use, RNN wake word listener
wink-nlp - Developer friendly Natural Language Processing ✨
Caffe - Caffe: a fast open framework for deep learning.
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
mxnet - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
Caffe2
Serpent.AI - Game Agent Framework. Helping you create AIs / Bots that learn to play any game you own!
whisper.cpp - Port of OpenAI's Whisper model in C/C++
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
silero-models - Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple