LocalAIVoiceChat vs llamafile

| | LocalAIVoiceChat | llamafile |
|---|---|---|
| Mentions | 4 | 36 |
| Stars | 325 | 15,410 |
| Growth | - | 30.4% |
| Activity | 7.0 | 9.6 |
| Latest commit | 6 days ago | 7 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LocalAIVoiceChat
- Show HN: Open-source macOS AI copilot (using vision and voice)
I was following these two projects by someuser on GitHub, which make similar things possible with local models. Sending screenshots to OpenAI is expensive if done every few seconds or minutes.
https://github.com/KoljaB/LocalAIVoiceChat
While the one below uses OpenAI, I don't see why it couldn't be replaced with the project above and a local model.
https://github.com/KoljaB/Linguflex
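For a rough sense of "expensive": at a hypothetical $0.01 per vision API call, one screenshot every 10 seconds is 6 calls a minute, or 8,640 calls per day, which works out to roughly $86/day; even one screenshot per minute is still about $14/day. A local model sidesteps that cost entirely.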
- ChatGPT Voice Announced (By Greg Brockman)
What a coincidence, I was just looking for something similar for local models and stumbled upon this. His repo seems full of TTS/STT projects.
https://github.com/KoljaB/LocalAIVoiceChat
- FLaNK Stack Weekly for 13 November 2023
- Introducing: a local realtime talkbot
Code: If you're curious, want to chip in, or just want to take a look, here's the link to the GitHub.
llamafile
- FLaNK-AIM Weekly 06 May 2024
- llamafile v0.8
- Mistral AI Launches New 8x22B MoE Model
I think the llamafile[0] system works best. The binary works on the command line or launches a mini webserver. Llamafile offers builds of Mixtral-8x7B-Instruct, so presumably they may package this one up as well (potentially in a quantized format).
You would have to confirm with someone deeper in the ecosystem, but I think you should be able to run this new model as-is against a llamafile.
[0] https://github.com/Mozilla-Ocho/llamafile
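As a minimal sketch of talking to that mini webserver, assuming a llamafile was started with `--server` and is listening on llama.cpp's default port 8080 with its `/completion` endpoint (both are assumptions; adjust if your setup differs):

```python
import json
import urllib.request

# Query a llamafile running in server mode (e.g. ./model.llamafile --server).
# Port 8080 and the /completion endpoint are llama.cpp server defaults; they
# are assumptions here, not guarantees for every build.
payload = {"prompt": "Explain Mixture-of-Experts in one sentence.", "n_predict": 96}
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```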
- Apple Explores Home Robotics as Potential 'Next Big Thing'
Thermostats: https://www.sinopetech.com/en/products/thermostat/
I haven't tried running a local speech-to-text engine feeding an LLM to control Home Assistant. Maybe someone is working on this already?
STT: https://github.com/SYSTRAN/faster-whisper
LLM: https://github.com/Mozilla-Ocho/llamafile/releases
LLM: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-D...
It would take some tweaking to get the voice commands working correctly.
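A minimal sketch of that voice-command pipeline, assuming faster-whisper for transcription and a llamafile server already running locally (the model size, endpoint, and prompt below are illustrative, not a tested Home Assistant integration):

```python
import json
import urllib.request

from faster_whisper import WhisperModel  # pip install faster-whisper

# 1. Speech-to-text: transcribe a recorded voice command with faster-whisper.
stt = WhisperModel("small", compute_type="int8")
segments, _info = stt.transcribe("command.wav")
command = " ".join(segment.text for segment in segments).strip()

# 2. LLM: ask a locally running llamafile server (llama.cpp defaults assumed)
#    to turn the transcript into a Home Assistant action.
prompt = f"Voice command: {command}\nHome Assistant action:"
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps({"prompt": prompt, "n_predict": 64}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```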
- LLaMA Now Goes Faster on CPUs
While I did not succeed in making the matmul code from https://github.com/Mozilla-Ocho/llamafile/blob/main/llamafil... work in isolation, I compared eigen, openblas, and mkl: https://gist.github.com/Dobiasd/e664c681c4a7933ef5d2df7caa87...
In this (very primitive!) benchmark, MKL was a bit better than Eigen (~10%) on my machine (i5-6600).
Since the article https://justine.lol/matmul/ compared the new kernels with MKL, we can (by transitivity) compare the new kernels with Eigen this way, at least very roughly, for this one use case.
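To reproduce the flavor of that comparison without a C++ toolchain, here is an equally primitive matmul timing in Python; numpy delegates to whatever BLAS it was built against (OpenBLAS, MKL, etc.), so treat the numbers as install-specific:

```python
import time

import numpy as np

# Time a single-precision 2048x2048 matmul and report throughput.
n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up call so one-time initialization doesn't skew the timing
reps = 10
start = time.perf_counter()
for _ in range(reps):
    a @ b
elapsed = (time.perf_counter() - start) / reps
# A matmul of two n x n matrices costs ~2*n^3 floating-point operations.
print(f"{elapsed * 1e3:.1f} ms per matmul, {2 * n**3 / elapsed / 1e9:.1f} GFLOP/s")
```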
- Llamafile 0.7 Brings AVX-512 Support: 10x Faster Prompt Eval Times for AMD Zen 4
Yes, they're just ZIP files that also happen to be Actually Portable Executables.
https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file...
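The ZIP half of that claim is easy to verify with the standard library alone (the filename below is a placeholder for whichever llamafile you have on disk):

```python
import zipfile

# A llamafile is both a native executable and a valid ZIP archive, so the
# stock zipfile module can open it and list the weights packed inside.
path = "mistral-7b-instruct.llamafile"  # placeholder filename
print(zipfile.is_zipfile(path))  # expected: True
with zipfile.ZipFile(path) as archive:
    for name in archive.namelist():
        print(name)
```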
- Show HN: I made an app to use local AI as daily driver
Have you seen llamafile[0]?
[0] https://github.com/Mozilla-Ocho/llamafile
- FLaNK Stack 26 February 2024
- Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models
llama.cpp has integrated Gemma support, so you can use llamafile for this. It is a standalone executable that is portable across most popular OSes.
https://github.com/Mozilla-Ocho/llamafile/releases
So, download the executable from the releases page under Assets. You want either just `main` or just `server`. Don't get the huge ones with the model inlined in the file. The executable is about 30 MB in size:
https://github.com/Mozilla-Ocho/llamafile/releases/download/...
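A sketch of that download step in Python; the asset URL is a placeholder (release assets are version-specific), and on Unix-likes the file also has to be marked executable before it will run:

```python
import os
import stat
import urllib.request

# Placeholder URL: substitute a real asset from the llamafile releases page.
url = "https://github.com/Mozilla-Ocho/llamafile/releases/download/<version>/<asset>"
dest = "llamafile"

urllib.request.urlretrieve(url, dest)
# Equivalent of chmod +x so the binary can be executed directly.
os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
print(f"downloaded {os.path.getsize(dest) / 1e6:.0f} MB to ./{dest}")
```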
- Ollama releases OpenAI API compatibility
The improvements in ease of use for locally hosting LLMs over the last few months have been amazing. I was ranting about how easy https://github.com/Mozilla-Ocho/llamafile is just a few hours ago [1]. Now I'm torn as to which one to use :)
1: Quite literally hours ago: https://euri.ca/blog/2024-llm-self-hosting-is-easy-now/
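The compatibility layer means the official `openai` Python client can point at a local Ollama instance just by swapping the base URL; the model name below is an example, and Ollama ignores the API key even though the client insists on one:

```python
from openai import OpenAI  # pip install openai

# Point the standard OpenAI client at Ollama's local OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama2",  # any model already pulled with `ollama pull`
    messages=[{"role": "user", "content": "Why host an LLM locally?"}],
)
print(response.choices[0].message.content)
```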
What are some alternatives?
vimGPT - Browse the web with GPT-4V and Vimium
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
cucim - cuCIM - RAPIDS GPU-accelerated image processing library
ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
wubloader
langchain - 🦜🔗 Build context-aware reasoning applications
wave - Realtime Web Apps and Dashboards for Python and R
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
PyMISP - Python library using the MISP Rest API
llama.cpp - LLM inference in C/C++
engblogs - learn from your favorite tech companies
safetensors - Simple, safe way to store and distribute tensors