LAVIS vs clipseg

| | LAVIS | clipseg |
|---|---|---|
| Mentions | 18 | 7 |
| Stars | 8,781 | 1,014 |
| Growth | 2.9% | - |
| Activity | 6.3 | 3.8 |
| Latest commit | 19 days ago | 4 months ago |
| Primary language | Jupyter Notebook | Python |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LAVIS
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- [D] Why is most Open Source AI happening outside the USA?
For multimodal, there's China (many), then Salesforce.
- Need help for a colab notebook running Lavis blip2_instruct_vicuna13b?
Been trying all day to get a working inference for this example: https://github.com/salesforce/LAVIS/tree/main/projects/instructblip
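For anyone hitting the same wall, the basic inference path from that project page looks roughly like this (a minimal sketch; the image path and prompt are placeholders, and the Vicuna-13B weights have to be prepared separately per the LAVIS instructions, which is where Colab setups usually fail):

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Loads InstructBLIP with a Vicuna-13B language model. Note: LAVIS does not
# ship the Vicuna weights; they must be obtained and configured separately.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct",
    model_type="vicuna13b",
    is_eval=True,
    device=device,
)

raw_image = Image.open("example.jpg").convert("RGB")  # placeholder image
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Instruction-following generation: pass the image plus a free-form prompt.
output = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(output)
```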
- most sane web3 job listing
There have also been big breakthroughs in computer vision. Not that long ago it was hard to recognize whether a photo contained a bird; that's solved now by models like CLIP, YOLO, or Segment Anything. Now research has moved on to generating 3D scenes from images or interactively answering questions about images.
- I work at a non-tech company and have been asked to make software that is impossible. How do I explain it to my boss?
The new hotness is multimodal vision-language models like InstructBLIP that can interactively answer questions about images. Check out the examples in the GitHub repo; I would not have thought this was possible a few years ago.
- Two-minute Daily AI Update (Date: 5/15/2023)
Salesforce's BLIP family has a new member: InstructBLIP, a vision-language instruction-tuning framework built on BLIP-2 models. It has achieved state-of-the-art zero-shot generalization performance on a wide range of vision-language tasks, substantially outperforming BLIP-2 and Flamingo. (Source)
- InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
GitHub
- Can I use my own art as a training set?
Most of my workflows are self-made. For captioning I used BLIP-2 in a custom script that automates the process by walking directories and their sub-directories and creating a .txt file beside each image. This way I can keep my images organized in their proper directories without having to dump them all in one place.
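The poster's script isn't shown, but a directory-walking captioner along those lines could look roughly like this, assuming the LAVIS BLIP-2 captioning API (the root directory and extension set are placeholders):

```python
import os
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a BLIP-2 captioning model via LAVIS.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt", model_type="caption_coco_opt2.7b", is_eval=True, device=device
)

EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp"}

# Walk the tree so images can stay organized in their own directories.
for root, _dirs, files in os.walk("training_images"):  # placeholder root
    for fname in files:
        if os.path.splitext(fname)[1].lower() not in EXTENSIONS:
            continue
        path = os.path.join(root, fname)
        raw = Image.open(path).convert("RGB")
        image = vis_processors["eval"](raw).unsqueeze(0).to(device)
        caption = model.generate({"image": image})[0]
        # Write the caption beside the image, keeping the layout intact.
        with open(os.path.splitext(path)[0] + ".txt", "w") as f:
            f.write(caption)
```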
- FLiP Stack Weekly for 13-Feb-2023
clipseg
- How to blend a logo or clip art to a design
Following the comments on this old post, I tried to use in-painting with manual mask selection. I didn't get beautiful results, but I'm sure with some tweaking I could make it better. The main problem I had was having to manually select the area where I wanted to place the logo and trying to resize my logo mask to fit the segment. I tried some automatic segmentation tools (Clipseg and Segment Anything), but I couldn't tell the segmentation models to find a good area for logo placement (i.e. some small flat surface). Given the complexity of what I was dealing with, I think there could be a better way (XY problem).
- New Feature: "ZOOM ENHANCE" for the A1111 WebUI. Automatically fix small details like faces and hands!
The addon utilizes clipseg for region masking, which was trained on "an extended version of the PhraseCut dataset" (many thousands of images).
- Txt2mask just received a big update!! 🎅
You'll also need to make sure to update your clipseg repo; the script won't do this for you. Namely, you just need to update the models/clipseg.py file to ensure your clipseg has support for the new model.
- [P] Image search with localization and open-vocabulary reranking.
For localisation at search time I ended up using OWL-ViT, and it worked really well. I did not try Detic or CLIPSeg, but I would be interested to hear if anyone else has tried these.
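For reference, query-time localization with OWL-ViT can be sketched with the Hugging Face transformers port roughly like this (the checkpoint, image path, and text queries are illustrative):

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("photo.jpg").convert("RGB")  # placeholder image
queries = [["a red bicycle", "a dog"]]  # open-vocabulary queries, one list per image

inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process raw logits into scored boxes in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs=outputs, threshold=0.1, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{queries[0][label]}: {round(score.item(), 2)} at {box.tolist()}")
```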
- Who needs prompt2prompt anyway? SD 1.5 inpainting model with clipseg prompt for "hair" and various prompts for different hair colors
clipseg is an image segmentation method used to find a mask for an image from a prompt. I implemented it as an executor for dalle-flow and added it to my bot yasd-discord-bot.
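The prompt-to-mask step that workflows like this rely on can be sketched with the Hugging Face transformers port of CLIPSeg roughly as follows (the checkpoint is the public CIDAS release; file names and the "hair" prompt are placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("portrait.jpg").convert("RGB")  # placeholder image

# One text prompt per mask; CLIPSeg scores every pixel against the prompt.
inputs = processor(text=["hair"], images=[image], padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Sigmoid turns the low-resolution logits into a soft mask; squeeze drops
# singleton batch dims so a single prompt yields a 2-D (352, 352) map.
probs = torch.sigmoid(outputs.logits).squeeze()

# Upscale to the original image size and save as a grayscale inpainting mask.
mask = Image.fromarray((probs.numpy() * 255).astype("uint8")).resize(image.size)
mask.save("hair_mask.png")
```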
- txt2mask working in imaginAIry python library
Automated Replacement (txt2mask) by clipseg
- txt2mask was just released! We don't have to use the brush tool for inpainting anymore!
What are some alternatives?
pytorch-widedeep - A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch
stable-diffusion - Latent Text-to-Image Diffusion
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
Detic - Code release for "Detecting Twenty-thousand Classes using Image-level Supervision".
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
imaginAIry - Pythonic AI generation of images and videos
robo-vln - Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
txt2mask - Automatically create masks for Stable Diffusion inpainting using natural language.
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
dalle-flow - 🌊 A Human-in-the-Loop workflow for creating HD images from text
linkis - Apache Linkis builds a computation middleware layer to facilitate connection, governance and orchestration between the upper applications and the underlying data engines.
unprompted - Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.