| | OpenAdapt | vimGPT |
|---|---|---|
| Mentions | 28 | 7 |
| Stars | 681 | 2,515 |
| Growth | 30.5% | - |
| Activity | 9.3 | 7.4 |
| Latest commit | 3 days ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenAdapt
- Llama 3-V: Matching GPT4-V with a 100x smaller model and 500 dollars
Our initial testing suggests MiniCPM outperforms InternVL for GUI understanding: https://github.com/OpenAdaptAI/OpenAdapt/issues/637#issuecom...
(InternVL appears to hallucinate more.)
- Why MSFT Copilot+ and AI PCs are the final nail in the coffin of open computing
We have Linux support on the roadmap in https://github.com/OpenAdaptAI/OpenAdapt.
OpenAdapt has similar functionality, except:
- it's open source
- it only records when you explicitly tell it to
- it has multiple PII/PHI scrubbing providers built in (see https://github.com/OpenAdaptAI/OpenAdapt?tab=readme-ov-file#...; a sketch of the scrubbing step follows below)
- the purpose for recording is to automate tasks in desktop apps
- it's cross platform (Mac and Windows now, Linux coming soon)
Full disclosure: I'm the primary author. Feedback welcome!
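To make the scrubbing bullet above concrete, here is a minimal sketch using Microsoft Presidio, one provider in this space; this is illustrative only, not OpenAdapt's actual scrubbing code:

```python
# Minimal PII-scrubbing sketch with Microsoft Presidio (illustrative only).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "Call John Smith at 212-555-0123 about the order."
findings = analyzer.analyze(text=text, language="en")  # detect PII entities
scrubbed = anonymizer.anonymize(text=text, analyzer_results=findings)
print(scrubbed.text)  # e.g. "Call <PERSON> at <PHONE_NUMBER> about the order."
```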
- PaliGemma: Open-Source Multimodal Model by Google
Excited to test how this performs compared to MiniCPMv2, especially when analyzing GUI images: https://github.com/OpenAdaptAI/OpenAdapt/issues/637
- Show HN: Tarsier – vision for text-only LLM web agents that beats GPT-4o
Congratulations on shipping!
In https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt... we use FastSAM to first segment the UI elements, then have the LLM describe each segment individually. This seems to work quite well; see https://twitter.com/OpenAdaptAI/status/1789430587314336212 for a demo.
More coming soon!
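For the curious, a minimal sketch of the segment-then-describe idea (not OpenAdapt's actual code); it assumes the `ultralytics` FastSAM wrapper, and `describe_segment` is a hypothetical stand-in for the LMM call that describes one crop:

```python
from PIL import Image
from ultralytics import FastSAM

def describe_segment(crop: Image.Image) -> str:
    """Hypothetical stand-in for an LMM call that describes one UI element."""
    raise NotImplementedError

model = FastSAM("FastSAM-s.pt")               # small FastSAM checkpoint
screenshot = Image.open("screenshot.png")      # full-screen UI capture
results = model(screenshot, retina_masks=True) # segment everything in the image

descriptions = []
for box in results[0].boxes.xyxy.tolist():     # one bounding box per segment
    x1, y1, x2, y2 = map(int, box)
    crop = screenshot.crop((x1, y1, x2, y2))
    descriptions.append(describe_segment(crop))  # describe each segment individually
```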
- GPT-4o
- Rabbit R1 can be run on an Android device
- OpenAdapt: AI-First Process Automation with Large Multimodal Models
- Adapter between LMMs and traditional desktop and web GUI
- I Witnessed the Future of AI, and It's a Broken Toy
> Rabbit has said the device will be able to learn any app, if you teach it.
We're building this over at https://github.com/OpenAdaptAI/OpenAdapt. OpenAdapt learns to automate tasks in desktop apps by observing human demonstrations.
Early demo: https://twitter.com/abrichr/status/1784307190062342237 (more coming soon!)
The demo is kept deliberately simple for brevity; OpenAdapt also works with arbitrary applications and operations.
Also, we're open source. Contributions and feedback are welcome and encouraged :)
- Memary is a cutting-edge long-term memory system based on a knowledge graph
Very interesting, thank you for making this available!
At OpenAdapt (https://github.com/OpenAdaptAI/OpenAdapt) we are looking into using pm4py (https://github.com/pm4py) to extract a process graph from a recording of user actions.
I will look into this more closely. In the meantime, could the authors share their perspective on whether Memary could be useful here?
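For context, a minimal sketch of what that pm4py step could look like, treating each recording as a process-mining "case" and each user action as an "activity" (column names are made up; this is not OpenAdapt's actual pipeline):

```python
import pandas as pd
import pm4py

# Toy event log: two recordings, each a sequence of timestamped user actions.
df = pd.DataFrame({
    "recording_id": ["r1", "r1", "r1", "r2", "r2"],
    "action": ["click_file", "click_save", "type_name", "click_file", "click_open"],
    "timestamp": pd.to_datetime([
        "2024-05-01 10:00:00", "2024-05-01 10:00:05", "2024-05-01 10:00:09",
        "2024-05-02 09:30:00", "2024-05-02 09:30:04",
    ]),
})
log = pm4py.format_dataframe(
    df, case_id="recording_id", activity_key="action", timestamp_key="timestamp"
)
# Directly-follows graph: edge weights count how often one action follows another.
dfg, start_activities, end_activities = pm4py.discover_dfg(log)
print(dfg)
```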
vimGPT
- Show HN: Tarsier – vision for text-only LLM web agents that beats GPT-4o
How does the performance compare to VimGPT[0]?
I assume the screenshot-based approach is similar, while the text-based approach is where the improvement comes from?
Very cool either way!
[0] https://github.com/ishan0102/vimGPT
- Show HN: Skyvern – open-source browser automation tool
- FLaNK Stack Weekly for 13 November 2023
- vimGPT is an experimental tool that uses GPT-4 Vision and the Vimium Chrome extension to let ChatGPT browse the web visually
- Using GPT-4 Vision with Vimium to browse the web
It's insane that this is now possible:
https://github.com/ishan0102/vimGPT/blob/682b5e539541cd6d710...
> "You need to choose which action to take to help a user do this task: {objective}. Your options are navigate, type, click, and done. Navigate should take you to the specified URL. Type and click take strings where if you want to click on an object, return the string with the yellow character sequence you want to click on, and to type just a string with the message you want to type. For clicks, please only respond with the 1-2 letter sequence in the yellow box, and if there are multiple valid options choose the one you think a user would select. For typing, please return a click to click on the box along with a type with the message to write. When the page seems satisfactory, return done as a key with no value. You must respond in JSON only with no other fluff or bad things will happen. The JSON keys must ONLY be one of navigate, type, or click. Do not return the JSON inside a code block."
What are some alternatives?
ios-mail - Secure email that protects your privacy
CogVLM - a state-of-the-art open visual language model | multimodal pretrained model
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
CoC2023 - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data
adept-inference - Inference code for Persimmon-8B
BrowserBox - 🌀 Browse the web from a browser you run on a server, rather than on your local device. Lightweight virtual browser. For security, privacy and more! By https://github.com/dosyago
IfcOpenShell - Open source IFC library and geometry engine
FLaNK-Halifax - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data
strawberry - A GraphQL library for Python that leverages type annotations 🍓
PyMISP - Python library using the MISP Rest API