CogVLM vs vimGPT

| | CogVLM | vimGPT |
|---|---|---|
| Mentions | 16 | 6 |
| Stars | 5,193 | 2,466 |
| Growth | 10.2% | - |
| Activity | 9.0 | 7.4 |
| Latest commit | 28 days ago | 18 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
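As a rough illustration, a recency-weighted score of this kind can be computed with an exponential decay over commit ages. The half-life and weighting below are a hypothetical sketch, not the tracker's actual formula:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: a commit's weight halves every
    `half_life_days` days (hypothetical formula, not the tracker's)."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Three commits aged 1, 10 and 90 days contribute unequally:
print(round(activity_score([1, 10, 90]), 2))  # ~1.9
```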
CogVLM
-
Mixtral: Mixture of Experts
CogVLM is very good in my (brief) testing: https://github.com/THUDM/CogVLM
The model weights seem to be under a non-commercial license, not true open source, but it is "open access" as you requested.
-
IT Employment Grew by Just 700 Jobs in 2023, Down From 267,000 in 2022
there's increasing growth in most places in the world
https://twitter.com/elonmusk/status/1743028102446408026
here's a complete feature map of what was released in 2023:
https://twitter.com/enriquebrgn/status/1740950767325024387
I think that's definitely a signal that the B and C teams weren't needed, considering they cut 90% of staff LOL.
As for the bots, AI is making it easier than ever to bypass those systems. CogVLM is just sitting there menacingly on GitHub: https://github.com/THUDM/CogVLM
- Show HN: I built an open source AI video search engine to learn more about AI
-
CogAgent-18B – visual-based GUI Agent capabilities
Jump to heading for benchmarks and examples: https://github.com/THUDM/CogVLM/tree/main?tab=readme-ov-file...
-
What do you think? When should we expect the next SDXL version?
Honestly, at this point there is no need for humans for captioning, except maybe for NSFW content. Img2text is just good enough for nearly all images. GPT-4 Vision or an open-source equivalent (like CogVLM https://github.com/THUDM/CogVLM ) is good enough.
-
Shining the spotlight on CogVLM
A core Llama.cpp contributor, cmp-nct, stumbled upon what might be the next leap forward for vision/language models. CogVLM (which uses a Vicuna 7B language model combined with a 9B vision tower) excels particularly in OCR (Optical Character Recognition), detail detection, and minimal hallucinations. It effectively understands both handwritten and typed text, context, fine details, and background graphics, and it even provides pixel coordinates for small visual targets. CogVLM surpasses other models like llava-1.5 and Qwen-VL in performance.
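For reference, querying CogVLM through Hugging Face looks roughly like the pattern below. The model id THUDM/cogvlm-chat-hf and the build_conversation_input_ids helper come from the project's published examples loaded via trust_remote_code, so treat this as a sketch of that pattern rather than a tested recipe:

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Tokenizer comes from Vicuna; the multimodal model ships its own code.
tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda").eval()

image = Image.open("screenshot.png").convert("RGB")
query = "What text appears in this image?"  # OCR-style question

# build_conversation_input_ids is a helper defined by the remote code
inputs = model.build_conversation_input_ids(
    tokenizer, query=query, history=[], images=[image]
)
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}
with torch.no_grad():
    out = model.generate(**inputs, max_length=2048, do_sample=False)
    out = out[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```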
-
Image-to-Caption Generator
https://github.com/THUDM/CogVLM (really impressive)
-
Gemini: Google's most capable AI model yet
I'm researching using LLMs for alt-text suggestions for forum users; can you share your findings so far?
Outside of GPT-4V I had good first results with https://github.com/THUDM/CogVLM
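If it helps, a minimal GPT-4V alt-text call looks something like the sketch below; the model name gpt-4-vision-preview and the prompt wording are assumptions to swap for whatever endpoint you settle on:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_alt_text(image_url: str) -> str:
    """Ask GPT-4V for a one-sentence alt-text suggestion (hypothetical prompt)."""
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt text (one sentence) for this image."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip()

print(suggest_alt_text("https://example.com/photo.jpg"))
```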
-
Open-source LLMs with Image Interpretation
I've got some decent results with CogVLM. Resolution kinda sucks at 490x490, though.
- FLaNK Stack Weekly for 27 November 2023
vimGPT
- Show HN: Skyvern – open-source browser automation tool
- FLaNK Stack Weekly for 13 November 2023
- vimGPT is an experimental tool that uses GPT-4 Vision and the Vimium Chrome plugin to let ChatGPT browse the web visually
-
Using GPT-4 Vision with Vimium to browse the web
It's insane that this is now possible:
https://github.com/ishan0102/vimGPT/blob/682b5e539541cd6d710...
> "You need to choose which action to take to help a user do this task: {objective}. Your options are navigate, type, click, and done. Navigate should take you to the specified URL. Type and click take strings where if you want to click on an object, return the string with the yellow character sequence you want to click on, and to type just a string with the message you want to type. For clicks, please only respond with the 1-2 letter sequence in the yellow box, and if there are multiple valid options choose the one you think a user would select. For typing, please return a click to click on the box along with a type with the message to write. When the page seems satisfactory, return done as a key with no value. You must respond in JSON only with no other fluff or bad things will happen. The JSON keys must ONLY be one of navigate, type, or click. Do not return the JSON inside a code block."
What are some alternatives?
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
CoC2023 - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
BrowserBox - 🌀 Browse the web from a browser you run on a server, rather than on your local device. Lightweight virtual browser. For security, privacy and more! By https://github.com/dosyago
Qwen-VL - The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
PyMISP - Python library using the MISP Rest API
uform - Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️
FLaNK-Halifax - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data
LinkBERT - [ACL 2022] LinkBERT: A Knowledgeable Language Model 😎 Pretrained with Document Links
GPT-V-on-Web - 👀🧠 GPT-4 Vision x 💪⌨️ Vimium = Autonomous Web Agent