Detic vs LAVIS

| | Detic | LAVIS |
| --- | --- | --- |
| Mentions | 11 | 18 |
| Stars | 1,769 | 8,738 |
| Growth | 1.0% | 2.4% |
| Activity | 1.9 | 6.3 |
| Latest Commit | about 1 month ago | 14 days ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | BSD 3-Clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Detic
-
Autodistill: A new way to create CV models
Some of the foundation/base models include:
* GroundedSAM (Segment Anything Model)
* DETIC
* GroundingDINO
-
[P] Image search with localization and open-vocabulary reranking.
For localisation at search time I ended up using OWL-ViT. This worked really well. I did not try Detic or CLIPseg but would be interested to hear if anyone else has tried these?
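For anyone wanting to try this kind of setup, below is a minimal sketch of open-vocabulary localization with OWL-ViT via the Hugging Face Transformers API; the checkpoint name, text queries, and score threshold are illustrative assumptions, not details from the post.

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

# Assumed checkpoint; other OWL-ViT sizes exist on the Hub.
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("desk.jpg").convert("RGB")
queries = [["a laptop", "a coffee mug"]]  # one list of text queries per image
inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)
for box, score, label in zip(
    results[0]["boxes"], results[0]["scores"], results[0]["labels"]
):
    print(queries[0][label], round(score.item(), 3), box.tolist())
```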
-
training object detector using classified images?
git clone https://github.com/facebookresearch/Detic
cd Detic
pip install -r requirements.txt
python demo.py --config-file configs/Detic_LCOCOI21k_CLIP_SwinB_896b32_4x_ft4x_max-size.yaml --input desk.jpg --output out.jpg --vocabulary lvis --opts MODEL.WEIGHTS models/Detic_LCOCOI21k_CLIP_SwinB_896b32_4x_ft4x_max-size.pth
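Note that Detic's install instructions also assume detectron2 is set up first. The README additionally documents a custom-vocabulary mode; a hedged variant of the same demo command, with illustrative class names:

```
python demo.py --config-file configs/Detic_LCOCOI21k_CLIP_SwinB_896b32_4x_ft4x_max-size.yaml --input desk.jpg --output out.jpg --vocabulary custom --custom_vocabulary headphone,webcam,paper,coffee --opts MODEL.WEIGHTS models/Detic_LCOCOI21k_CLIP_SwinB_896b32_4x_ft4x_max-size.pth
```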
-
[P] Any object detection library
You might want to take a look at DETIC : https://github.com/facebookresearch/Detic (Open Vocabulary Object Detection, trained on thousands of classes)
-
[P] Awesome Image Segmentation Project Based on Deep Learning (5.6k star)
Are there any open-label segmentation models included in this repo, like Detic or LSeg?
-
[R] CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory + Code + Robot demo
We made this using pretty recent advances in web-data pretrained models like Detic and LSeg for detection, CLIP for visual queries, and Sentence BERT for semantic queries. Our "database" is really a neural field (Instant NGP) that maps from 3D coordinates to a high dimensional embedding vector in the same representation space as CLIP and SBERT.
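As a rough illustration of the idea (not the authors' code, which uses an Instant-NGP hash grid and contrastive training against labels from Detic, LSeg, and CLIP), the "database" here is just a network mapping 3D coordinates to embeddings that can be scored against a text query embedding:

```python
# Minimal sketch: a plain MLP stands in for the Instant-NGP field.
import torch
import torch.nn as nn

class SemanticField(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) coordinates -> (N, embed_dim) L2-normalized embeddings
        emb = self.mlp(xyz)
        return emb / emb.norm(dim=-1, keepdim=True)

field = SemanticField()
points = torch.rand(1024, 3)        # candidate scene coordinates
text_emb = torch.randn(512)         # stand-in for a CLIP/SBERT text embedding
text_emb = text_emb / text_emb.norm()
scores = field(points) @ text_emb   # cosine similarity per point
best = points[scores.argmax()]      # location that best matches the query
```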
-
[P] Using OpenAI's CLIP repository as a support, I was able to create a software to detect anything in an image at its original resolution!
Is it similar to the open-vocabulary Detic?
-
Researchers at Meta and the University of Texas at Austin Propose ‘Detic’: A Method to Detect Twenty-Thousand Classes using Image-Level Supervision
Code for https://arxiv.org/abs/2201.02605 found: https://github.com/facebookresearch/Detic
- Detecting Twenty-thousand Classes using Image-level Supervision
-
[R] Detecting Twenty-thousand Classes using Image-level Supervision
github: https://github.com/facebookresearch/Detic
LAVIS
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
-
[D] Why is most Open Source AI happening outside the USA?
For multimodal, there's China (many), then Salesforce.
-
Need help for a colab notebook running Lavis blip2_instruct_vicuna13b?
Been trying for all day to get a working inference for this example: https://github.com/salesforce/LAVIS/tree/main/projects/instructblip
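For reference, the LAVIS project pages document a load_model_and_preprocess entry point; a hedged sketch of InstructBLIP inference along those lines (the name/model_type strings should be double-checked against the repo's model zoo):

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed model zoo entries for the Vicuna-13B InstructBLIP variant.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct",
    model_type="vicuna13b",
    is_eval=True,
    device=device,
)

raw_image = Image.open("demo.jpg").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# generate() takes a dict with the image tensor and a free-form prompt.
answer = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(answer)
```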
-
most sane web3 job listing
There have also been big breakthroughs in computer vision. Not that long ago it was hard to recognize whether a photo contained a bird; that's solved now by models like CLIP, YOLO, or Segment Anything. Now research has moved on to generating 3D scenes from images or interactively answering questions about images.
-
I work at a non-tech company and have been asked to make software that is impossible. How do I explain it to my boss?
The new hotness is multimodal vision-language models like InstructBLIP that can interactively answer questions about images. Check out the examples in the github repo, I would not have thought this was possible a few years ago.
-
Two-minute Daily AI Update (Date: 5/15/2023)
Salesforce's BLIP family has a new member: InstructBLIP, a vision-language instruction-tuning framework built on BLIP-2 models. It has achieved state-of-the-art zero-shot generalization performance on a wide range of vision-language tasks, substantially outperforming BLIP-2 and Flamingo. (Source)
-
InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
GitHub
-
Can I use my own art as a training set?
Most of my workflows are self-made. For captioning I used BLIP-2 in a custom script that automates the process by going into directories and their sub-directories and creating a .txt file beside each image. This way I can keep my images organized in their proper directories, without having to dump them all in a single place.
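A hedged sketch of that workflow using LAVIS's BLIP-2 captioning models (the model name/type strings, directory path, and file handling are assumptions, not the commenter's actual script):

```python
from pathlib import Path

import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed model zoo entries for a BLIP-2 captioning checkpoint.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt",
    model_type="caption_coco_opt2.7b",
    is_eval=True,
    device=device,
)

# Walk a directory tree, caption each image, and save a .txt beside it.
for path in Path("art").rglob("*"):
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    raw_image = Image.open(path).convert("RGB")
    image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
    caption = model.generate({"image": image})[0]
    path.with_suffix(".txt").write_text(caption)
```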
- FLiP Stack Weekly for 13-Feb-2023
What are some alternatives?
GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
pytorch-widedeep - A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch
FasterRCNN - Clean and readable implementations of Faster R-CNN in PyTorch and TensorFlow 2 with Keras.
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
ultralytics - NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
segment-anything - The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
robo-vln - Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
clipseg - This repository contains the code of the CVPR 2022 paper "Image Segmentation Using Text and Image Prompts".
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
super-gradients - Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
linkis - Apache Linkis builds a computation middleware layer to facilitate connection, governance and orchestration between the upper applications and the underlying data engines.