CogVLM vs ComfyUI

| | CogVLM | ComfyUI |
|---|---|---|
| Mentions | 16 | 125 |
| Stars | 5,193 | 34,594 |
| Growth | 10.2% | - |
| Activity | 9.0 | 9.9 |
| Latest commit | 28 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CogVLM
-
Mixtral: Mixture of Experts
CogVLM is very good in my (brief) testing: https://github.com/THUDM/CogVLM
The model weights seem to be under a non-commercial license, not true open source, but it is "open access" as you requested.
-
IT Employment Grew by Just 700 Jobs in 2023, Down From 267,000 in 2022
increasing growth in most places in the world
https://twitter.com/elonmusk/status/1743028102446408026
Here's a complete feature map of what was released in 2023:
https://twitter.com/enriquebrgn/status/1740950767325024387
I think that's definitely a signal that the B and C teams weren't needed, considering they cut 90% of staff LOL.
As for the bots, AI is making it easier than ever to bypass those systems. CogVLM is just sitting there menacingly on GitHub: https://github.com/THUDM/CogVLM
- Show HN: I built an open source AI video search engine to learn more about AI
-
CogAgent-18B – visual-based GUI Agent capabilities
Jump to heading for benchmarks and examples: https://github.com/THUDM/CogVLM/tree/main?tab=readme-ov-file...
-
What do you think? When should we expect the next SDXL version?
Honestly, at this point there is no need for humans for captioning, except maybe for NSFW content. Img2text is just good enough for nearly all images. GPT-4 Vision or open-source equivalents (like CogVLM https://github.com/THUDM/CogVLM ) are just good enough.
-
Shining the spotlight on CogVLM
A core Llama.cpp contributor, cmp-nct, stumbled upon what might be the next leap forward for vision/language models. CogVLM (which pairs a Vicuna 7B language model with a 9B vision tower) excels particularly in OCR (optical character recognition), detail detection, and minimal hallucination. It effectively understands both handwritten and typed text, context, fine details, and background graphics. It even provides pixel coordinates for small visual targets. CogVLM surpasses other models like LLaVA-1.5 and Qwen-VL in performance.
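To make the "pixel coordinates" point concrete: CogVLM's grounding mode reports targets as bracketed boxes embedded in the generated text. Below is a minimal sketch of pulling those out, assuming the `[[x0,y0,x1,y1]]` format with coordinates normalized to 0–999 as described in the repo (the helper name is mine):

```python
import re

def parse_boxes(text, width, height):
    """Extract [[x0,y0,x1,y1]] box strings from grounded model output
    and scale the 0-999 normalized coordinates to pixel coordinates."""
    boxes = []
    for m in re.finditer(r"\[\[(\d+),(\d+),(\d+),(\d+)\]\]", text):
        x0, y0, x1, y1 = (int(v) for v in m.groups())
        boxes.append((x0 * width // 1000, y0 * height // 1000,
                      x1 * width // 1000, y1 * height // 1000))
    return boxes
```

The exact textual format may differ between CogVLM checkpoints, so treat the regex as a starting point, not a spec.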
-
Image-to-Caption Generator
https://github.com/THUDM/CogVLM (really impressive)
-
Gemini: Google's most capable AI model yet
I'm researching using LLMs for alt-text suggestions for forum users. Can you share your findings so far?
Outside of GPT-4V I had good first results with https://github.com/THUDM/CogVLM
-
Open-source LLMs with Image Interpretation
I've got some decent results with CogVLM. Resolution kinda sucks at 490x490, though.
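Since CogVLM resamples every input to 490×490, large images lose detail on the long edge. A quick helper (mine, not part of the repo) for checking what effective resolution an image ends up with after aspect-preserving downscaling:

```python
def fit_to_square(width, height, side=490):
    """Scale (width, height) to fit inside a side x side square,
    preserving aspect ratio -- the long edge sets the scale factor."""
    scale = side / max(width, height)
    return max(1, round(width * scale)), max(1, round(height * scale))
```

A 4000×3000 photo, for instance, effectively becomes 490×368 before the vision tower sees it, which is why small text in large screenshots can get lost.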
- FLaNK Stack Weekly for 27 November 2023
ComfyUI
-
ComflowySpace: An open-source version of better ComfyUI
The non-standard licensing puts me off contributing to or using this. It is frustrating how the phrase "open source" has been diluted in the AI/ML community. ComfyUI has a GPL license [1] while this project uses this [2]. I honestly don't know where I stand, since this is a legal document using non-standard phrasing to describe the rights around the source code.
This is a project that uses a custom license granting fewer rights than the ComfyUI project it self-describes as improving. I'm not sure the title is reflective of the project.
[1] - https://github.com/comfyanonymous/ComfyUI/blob/master/LICENS...
-
Show HN: I made an app to use local AI as daily driver
* LLaVA model: I'll add more documentation. You're right, LLaVA can't generate images. For image generation I don't have immediate plans, but check out these projects for local image generation.
- https://diffusionbee.com/
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
-
Show HN: ML Blocks – Deploy multimodal AI workflows without code
Check out ComfyUI for a much more advanced and open source version of this.
https://github.com/comfyanonymous/ComfyUI
-
Stable Code 3B: Coding on the Edge
I use Stable Diffusion family models for innovative art products.
On a small scale, you have to professionalize ComfyUI’s development. My PR to make it installable and to make a plugin ecosystem that makes sense should not be sitting unmerged (https://github.com/comfyanonymous/ComfyUI/pull/298).
On a medium scale, CLIP is holding you back. I would eagerly buy a 48GB card to accommodate a batch size 1, gradient checkpointed LoRA-trainable model with T5 for conditioning. I want PixArt-a or DeepFloyd/IF with the SDXL dataset and training. I get I can achieve so much with SDXL on 24GB, including just barely a fine tuning, I understand the engineering decisions here, but it’s too weak on prompts.
On a large scale, I'm willing to spend a little money up front. In those conditions you can be far more innovative; you don't have to make everything for $0. Shane Carruth didn't make Primer for $0. I'm sure you've seen this movie; you get how astoundingly good it is. But he still spent something, only slightly more than the cost of an RTX 6000 Ada.
Innovators have budgets. It’s still worth releasing the most powerful possible model for expensive hardware, this is why everyone is talking about Mixtral, but it’s especially true of visual art.
-
Show HN: Comflowy – A ComfyUI Tutorial for Beginners
It's litegraph.js [1] and seems to be the only lib they include in /web [2] :
[1] https://github.com/jagenjo/litegraph.js
[2] https://github.com/comfyanonymous/ComfyUI/tree/master/web/li...
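ComfyUI serializes that litegraph canvas to JSON; in the API-format export, each node is keyed by id with a `class_type` and an `inputs` dict, and links to upstream nodes appear as `[node_id, output_index]` pairs. A hedged sketch (the function name is mine) that finds a graph's output nodes under that assumption:

```python
def terminal_nodes(workflow):
    """Return ids of nodes whose outputs nothing else consumes
    (typically SaveImage / preview nodes in a ComfyUI graph)."""
    consumed = set()
    for node in workflow.values():
        for value in node.get("inputs", {}).values():
            # A link is encoded as [upstream_node_id, output_index].
            if isinstance(value, list) and len(value) == 2:
                consumed.add(str(value[0]))
    return sorted(nid for nid in workflow if nid not in consumed)
```

Walking workflows this way is handy for tooling that queues or inspects graphs without loading the UI.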
-
ComfyUI on Windows 7?
It's possible you might get a later version of Comfy working, but I had no success with this method and the 1st Sept version of Comfy. The older versions are here, under Assets: https://github.com/comfyanonymous/ComfyUI/releases
-
Seeking out an experienced and empathetic coding buddy.
That said, please do learn coding, and don't get discouraged when somebody says to learn PyTorch or recommends using a Jupyter notebook with no further information on how to translate the skill into images. I would highly recommend some short-term goals. Get your feet wet by taking apart the UIs. The Comfy API documentation is here and the A1111 API documentation is here. There is a difference in completeness; welcome to programming. Writing nodes or plugins is also a good way to jump into this world. Custom wildcard logic might be very attractive to you if you aren't the type that wants to deal with a nested file structure to simulate logic.
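To make "writing nodes" concrete: a ComfyUI custom node is just a Python class following the `INPUT_TYPES`/`RETURN_TYPES`/`FUNCTION` convention, registered via `NODE_CLASS_MAPPINGS`. A minimal sketch (the node itself is a made-up example; real mask inputs are tensors, for which the same arithmetic works elementwise):

```python
class InvertMask:
    """Hypothetical example node: flips a mask (1.0 becomes 0.0)."""

    @classmethod
    def INPUT_TYPES(cls):
        # Declares one required socket of ComfyUI's MASK type.
        return {"required": {"mask": ("MASK",)}}

    RETURN_TYPES = ("MASK",)   # one MASK output socket
    FUNCTION = "invert"        # method ComfyUI calls to execute the node
    CATEGORY = "mask"          # menu placement in the UI

    def invert(self, mask):
        # Outputs are always returned as a tuple.
        return (1.0 - mask,)

# ComfyUI discovers nodes through this mapping in a custom_nodes package.
NODE_CLASS_MAPPINGS = {"InvertMask": InvertMask}
```

Dropping a file like this into `custom_nodes/` is the usual entry point; check the built-in nodes in the ComfyUI repo for the authoritative patterns.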
-
Need help installing ComfyUI
For example, ComfyUI can simply be downloaded and run using the portable version (https://github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_windows_portable_nvidia_cu121_or_cpu.7z) if you're not comfortable using Git etc.
-
Installing ComfyUI Manager on MacBook
https://github.com/comfyanonymous/ComfyUI scroll down to "Install"
- SAG (Self-Attention Guidance) for ComfyUI is here!
What are some alternatives?
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
stable-diffusion-webui - Stable Diffusion web UI
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
Qwen-VL - The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
vimGPT - Browse the web with GPT-4V and Vimium
sd-webui-controlnet - WebUI extension for ControlNet
uform - Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️
openOutpaint - local offline javascript and html canvas outpainting gizmo for stable diffusion webUI API 🐠
LinkBERT - [ACL 2022] LinkBERT: A Knowledgeable Language Model 😎 Pretrained with Document Links
a1111-nevysha-comfy-ui - A collection of tweak to improve Auto1111 UI//UX [Moved to: https://github.com/Nevysha/Cozy-Nest]