OpenAdapt vs CogVLM

| | OpenAdapt | CogVLM |
|---|---|---|
| Mentions | 25 | 16 |
| Stars | 538 | 5,193 |
| Growth | 48.3% | 10.2% |
| Activity | 9.3 | 9.0 |
| Latest commit | 3 days ago | 29 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenAdapt
- Rabbit R1 can be run on an Android device
- OpenAdapt: AI-First Process Automation with Large Multimodal Models
- Adapter between LMMs and traditional desktop and web GUIs
- I Witnessed the Future of AI, and It's a Broken Toy
> Rabbit has said the device will be able to learn any app, if you teach it.
We're building this over at https://github.com/OpenAdaptAI/OpenAdapt. OpenAdapt learns to automate tasks in desktop apps by observing human demonstrations.
Early demo: https://twitter.com/abrichr/status/1784307190062342237 (more coming soon!)
The demo is overly simplistic to keep it short -- it also works with arbitrary applications and operations.
Also, we're open source. Contributions and feedback are welcome and encouraged :)
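For anyone curious what "observing human demonstrations" amounts to mechanically, here is a minimal, hypothetical sketch (explicitly not OpenAdapt's actual code or API): capture the user's clicks and keystrokes, then play them back. OpenAdapt's value-add is the LMM layer on top of recordings like this, so it can generalize rather than literally replay.

```python
# Hypothetical sketch only -- NOT OpenAdapt's actual code or API.
# Record a demonstration (clicks and typed characters), then replay it.
import time

import pyautogui
from pynput import keyboard, mouse

events = []  # recorded demonstration: (timestamp, kind, payload)


def on_click(x, y, button, pressed):
    if pressed:
        events.append((time.time(), "click", (x, y)))


def on_press(key):
    char = getattr(key, "char", None)  # special keys have no .char
    if char is not None:
        events.append((time.time(), "key", char))
    if key == keyboard.Key.esc:  # press Esc to stop recording
        return False


# --- record: listen until Esc is pressed ---
with mouse.Listener(on_click=on_click), keyboard.Listener(on_press=on_press) as kl:
    kl.join()

# --- replay: repeat the demonstration with the original pacing ---
prev = events[0][0] if events else 0.0
for ts, kind, payload in events:
    time.sleep(max(0.0, ts - prev))
    prev = ts
    if kind == "click":
        pyautogui.click(*payload)
    else:
        pyautogui.write(payload)
```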
- Memary is a cutting-edge long-term memory system based on a knowledge graph
Very interesting, thank you for making this available!
At OpenAdapt (https://github.com/OpenAdaptAI/OpenAdapt) we are looking into using pm4py (https://github.com/pm4py) to extract a process graph from a recording of user actions.
I will look into this more closely. In the meantime, could the authors share their perspective on whether Memary could be useful here?
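For context, here is a rough sketch of the pm4py idea: turn a flat log of recorded user actions into a directly-follows graph. The column names and the toy event log are invented for illustration; only pm4py.format_dataframe and pm4py.discover_dfg are real library calls.

```python
import pandas as pd
import pm4py

# Toy action log: one row per recorded UI event, grouped by recording id.
events = pd.DataFrame({
    "recording_id": ["r1", "r1", "r1", "r2", "r2"],
    "action": ["click:File", "click:Open", "type:report.xlsx",
               "click:File", "click:Save"],
    "timestamp": pd.to_datetime([
        "2024-05-01 10:00:00", "2024-05-01 10:00:02", "2024-05-01 10:00:05",
        "2024-05-02 09:30:00", "2024-05-02 09:30:03",
    ]),
})

# Map our columns onto the case/activity/timestamp keys pm4py expects.
log = pm4py.format_dataframe(
    events, case_id="recording_id", activity_key="action", timestamp_key="timestamp"
)

# Directly-follows graph: edge weights count how often one action follows another.
dfg, start_activities, end_activities = pm4py.discover_dfg(log)
print(dfg, start_activities, end_activities)
```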
- Rabbit r1 source code [part 1]
See https://github.com/OpenAdaptAI/OpenAdapt for an alternative that works with desktop GUIs.
- Survey Study on AI Agent Architectures (2024)
Not mentioned: learning from demonstration. This is the approach we are taking at https://github.com/OpenAdaptAI/OpenAdapt.
- AI-First Process Automation with LLMs/Action/Multimodal/Visual Language Models
- Show HN: Skyvern – open-source browser automation tool
Congratulations on shipping!
Check out https://github.com/OpenAdaptAI/OpenAdapt for an open source (MIT license) alternative that also works on desktop (including Citrix!)
- LaVague: Open-source Large Action Model to automate Selenium browsing
https://github.com/mldsai/puterbot is designed for all desktop applications, including browsers. We're also working on a Chrome extension to support reading/writing directly to the DOM: https://github.com/OpenAdaptAI/OpenAdapt/pull/364
CogVLM
- Mixtral: Mixture of Experts
CogVLM is very good in my (brief) testing: https://github.com/THUDM/CogVLM
The model weights seem to be under a non-commercial license, so not truly open source, but it is "open access" as you requested.
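If you want to reproduce a quick test, the Hugging Face checkpoint can be driven roughly as below. This is adapted from the THUDM/cogvlm-chat-hf model card as I remember it, so treat the exact call signatures (especially build_conversation_input_ids, which comes from the repo's custom remote code) as assumptions and check the card before relying on it.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

# CogVLM pairs a Vicuna tokenizer with custom remote modeling code.
tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda").eval()

image = Image.open("screenshot.png").convert("RGB")
inputs = model.build_conversation_input_ids(
    tokenizer, query="Describe this image.", history=[], images=[image]
)
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    out = out[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```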
- IT Employment Grew by Just 700 Jobs in 2023, Down From 267,000 in 2022
There is increasing growth in most places in the world:
https://twitter.com/elonmusk/status/1743028102446408026
Here's a full feature map of what was released in 2023:
https://twitter.com/enriquebrgn/status/1740950767325024387
I think that's definitely a signal that the B and C teams weren't needed, considering they cut 90% of staff LOL.
As for the bots, AI is making it easier than ever to bypass those systems. CogVLM is just sitting there menacingly on GitHub: https://github.com/THUDM/CogVLM
- Show HN: I built an open source AI video search engine to learn more about AI
- CogAgent-18B – visual-based GUI Agent capabilities
Jump to heading for benchmarks and examples: https://github.com/THUDM/CogVLM/tree/main?tab=readme-ov-file...
- What do you think? When should we expect the next SDXL version?
Honestly, at this point there is no need for humans to do captioning, except maybe for NSFW content. Img2text is good enough for nearly all images; GPT-4 Vision or an open-source equivalent (like CogVLM: https://github.com/THUDM/CogVLM) handles it just fine.
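To make that concrete, a caption is one pipeline call away with any open img2text model; BLIP below is my stand-in choice (not the commenter's), and CogVLM or LLaVA would fill the same role with their own loading code:

```python
from transformers import pipeline

# "image-to-text" pipeline with an open captioner; swap the checkpoint as needed.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")
print(captioner("photo.jpg"))  # e.g. [{"generated_text": "a photo of ..."}]
```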
- Shining the spotlight on CogVLM
A core llama.cpp contributor, cmp-nct, stumbled upon what might be the next leap forward for vision/language models. CogVLM (which combines a Vicuna 7B language model with a 9B vision tower) excels particularly in OCR (Optical Character Recognition), detail detection, and minimal hallucination. It effectively understands both handwritten and typed text, context, fine details, and background graphics, and it even provides pixel coordinates for small visual targets. CogVLM surpasses other models like LLaVA-1.5 and Qwen-VL in performance.
- Image-to-Caption Generator
https://github.com/THUDM/CogVLM (really impressive)
- Gemini: Google's most capable AI model yet
I'm researching using LLMs for alt-text suggestions for forum users; can you share your findings so far?
Outside of GPT-4V I had good first results with https://github.com/THUDM/CogVLM
- Open-source LLMs with Image Interpretation
I've got some decent results with CogVLM. Resolution kinda sucks at 490x490, though.
- FLaNK Stack Weekly for 27 November 2023
What are some alternatives?
ios-mail - Secure email that protects your privacy
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
adept-inference - Inference code for Persimmon-8B
ComfyUI - The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
IfcOpenShell - Open source IFC library and geometry engine
Qwen-VL - The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
strawberry - A GraphQL library for Python that leverages type annotations 🍓
vimGPT - Browse the web with GPT-4V and Vimium
uform - Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️