| | LocalAIVoiceChat | vimGPT |
|---|---|---|
| Mentions | 4 | 7 |
| Stars | 325 | 2,474 |
| Growth | - | - |
| Activity | 7.0 | 7.4 |
| Last commit | 6 days ago | 25 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LocalAIVoiceChat
- Show HN: Open-source macOS AI copilot (using vision and voice)
I was following these two projects by someuser on GitHub, which make similar things possible with local models. Sending a screenshot to OpenAI is expensive if done every few seconds or minutes.
https://github.com/KoljaB/LocalAIVoiceChat
While the project below uses OpenAI, I don't see why it couldn't be replaced with the project above and a local model.
https://github.com/KoljaB/Linguflex
- ChatGPT Voice Announced (By Greg Brockman)
What a coincidence: I was just looking for something similar for local models and stumbled upon this. His repo seems full of TTS/STT projects.
https://github.com/KoljaB/LocalAIVoiceChat
- FLaNK Stack Weekly for 13 November 2023
- Introducing: a local realtime talkbot
Code: If you're curious, want to chip in, or just want to take a look, here's the link to the GitHub repo.
vimGPT
- Show HN: Skyvern – open-source browser automation tool
- FLaNK Stack Weekly for 13 November 2023
- vimGPT is an experimental tool that uses GPT-4 Vision and the Chrome plugin Vimium to let ChatGPT browse the internet visually
- Using GPT-4 Vision with Vimium to browse the web
It's insane that this is now possible:
https://github.com/ishan0102/vimGPT/blob/682b5e539541cd6d710...
> "You need to choose which action to take to help a user do this task: {objective}. Your options are navigate, type, click, and done. Navigate should take you to the specified URL. Type and click take strings where if you want to click on an object, return the string with the yellow character sequence you want to click on, and to type just a string with the message you want to type. For clicks, please only respond with the 1-2 letter sequence in the yellow box, and if there are multiple valid options choose the one you think a user would select. For typing, please return a click to click on the box along with a type with the message to write. When the page seems satisfactory, return done as a key with no value. You must respond in JSON only with no other fluff or bad things will happen. The JSON keys must ONLY be one of navigate, type, or click. Do not return the JSON inside a code block."
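The prompt above defines a small action schema: the model must reply in JSON whose keys are only `navigate`, `type`, `click`, or `done`, with clicks referencing the 1-2 letter Vimium hint sequence. A minimal sketch of how a controller might validate and dispatch such a reply is below; this is a hypothetical illustration, not vimGPT's actual code, and the function names (`parse_actions`, `dispatch`) are invented for the example.

```python
import json

# Allowed keys mirror the prompt: navigate, type, click, done.
ALLOWED_ACTIONS = {"navigate", "type", "click", "done"}

def parse_actions(raw: str) -> dict:
    """Parse the model's JSON reply and reject keys outside the schema."""
    actions = json.loads(raw)
    unknown = set(actions) - ALLOWED_ACTIONS
    if unknown:
        raise ValueError(f"unexpected action keys: {unknown}")
    return actions

def dispatch(actions: dict) -> list[str]:
    """Turn validated actions into a step log (stand-in for real browser calls)."""
    log = []
    if "navigate" in actions:
        log.append(f"navigate -> {actions['navigate']}")
    if "click" in actions:
        # Per the prompt, a click is the 1-2 letter yellow Vimium hint sequence.
        log.append(f"click hint -> {actions['click']}")
    if "type" in actions:
        log.append(f"type -> {actions['type']}")
    if "done" in actions:
        log.append("done")
    return log

# Example reply: click the box labeled "fA", then type a search query.
reply = '{"click": "fA", "type": "best pizza near me"}'
print(dispatch(parse_actions(reply)))
```

Note that the prompt says `done` carries "no value"; in valid JSON every key needs one, so a real implementation would likely accept `null` there.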
What are some alternatives?
llamafile - Distribute and run LLMs with a single file.
CogVLM - a state-of-the-art-level open visual language model | 多模态预训练模型
cucim - cuCIM - RAPIDS GPU-accelerated image processing library
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
wubloader
CoC2023 - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data
wave - Realtime Web Apps and Dashboards for Python and R
BrowserBox - 🌀 Browse the web from a browser you run on a server, rather than on your local device. Lightweight virtual browser. For security, privacy and more! By https://github.com/dosyago
PyMISP - Python library using the MISP Rest API
engblogs - learn from your favorite tech companies
FLaNK-Halifax - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data