robo-vln vs LAVIS
| | robo-vln | LAVIS |
|---|---|---|
| Mentions | 2 | 18 |
| Stars | 61 | 8,738 |
| Growth | - | 6.2% |
| Activity | 2.9 | 6.3 |
| Latest commit | 10 months ago | 12 days ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
robo-vln
- [R] Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation
Project Webpage: https://zubair-irshad.github.io/projects/robo-vln.html
Pytorch Code and Dataset: https://github.com/GT-RIPL/robo-vln
ArXiv paper: https://arxiv.org/abs/2104.1067
LAVIS
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- [D] Why is most Open Source AI happening outside the USA?
For multimodal, there's China (many groups), then Salesforce.
- Need help for a colab notebook running Lavis blip2_instruct_vicuna13b?
Been trying all day to get a working inference for this example: https://github.com/salesforce/LAVIS/tree/main/projects/instructblip (see the inference sketch after this list).
- most sane web3 job listing
There have also been big breakthroughs in computer vision. Not that long ago it was hard to recognize whether a photo contained a bird; that's solved now by models like CLIP, YOLO, or Segment Anything. Now research has moved on to generating 3D scenes from images or interactively answering questions about images.
- I work at a non-tech company and have been asked to make software that is impossible. How do I explain it to my boss?
The new hotness is multimodal vision-language models like InstructBLIP that can interactively answer questions about images. Check out the examples in the GitHub repo; I would not have thought this was possible a few years ago.
- Two-minute Daily AI Update (Date: 5/15/2023)
Salesforce's BLIP family has a new member: InstructBLIP, a vision-language instruction-tuning framework built on BLIP-2 models. It has achieved state-of-the-art zero-shot generalization performance on a wide range of vision-language tasks, substantially outperforming BLIP-2 and Flamingo. (Source)
- InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
Github
- Can I use my own art as a training set?
Most of my workflows are self-made. For captioning I used BLIP-2 in a custom script that automates the process by going into directories and their sub-directories and creating a .txt file beside each image. This way I can keep my images organized in their proper directories without having to dump them all in a single place. (A sketch of this kind of script appears after this list.)
- FLiP Stack Weekly for 13-Feb-2023
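For the InstructBLIP inference question above, here is a minimal sketch of running the model through the LAVIS API. It assumes LAVIS is installed and the Vicuna-13B weights have been prepared as described in the projects/instructblip instructions; the model name and type strings (`blip2_vicuna_instruct`, `vicuna13b`) and the image path are assumptions to verify against the LAVIS model zoo, not an official recipe.

```python
# Minimal InstructBLIP inference sketch with LAVIS (unofficial example).
# Assumes LAVIS is installed and the Vicuna-13B weights are set up per the
# projects/instructblip instructions; model name/type strings below are
# assumptions to check against the LAVIS model zoo.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the instruction-tuned BLIP-2 + Vicuna-13B model and its image preprocessor.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct",
    model_type="vicuna13b",
    is_eval=True,
    device=device,
)

# "example.jpg" is a placeholder path to any local image.
raw_image = Image.open("example.jpg").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Ask a free-form question about the image.
answer = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(answer)
```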
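And for the art-captioning workflow mentioned above, here is a rough sketch of a directory-walking captioning script built on a BLIP-2 captioning model from LAVIS. The model identifiers (`blip2_opt`, `caption_coco_opt2.7b`), the `my_art` root directory, and the extension filter are illustrative assumptions, not the poster's actual code.

```python
# Rough sketch of a BLIP-2 auto-captioning pass over a directory tree (not the
# poster's actual script). Writes one .txt caption next to each image so the
# images can stay organized in their own sub-directories.
import torch
from pathlib import Path
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# BLIP-2 captioning model; name/type strings are assumptions to verify
# against the LAVIS model zoo.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt", model_type="caption_coco_opt2.7b", is_eval=True, device=device
)

root = Path("my_art")  # hypothetical top-level folder of training images
extensions = {".jpg", ".jpeg", ".png", ".webp"}

for path in sorted(root.rglob("*")):
    if path.suffix.lower() not in extensions:
        continue
    raw_image = Image.open(path).convert("RGB")
    image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
    caption = model.generate({"image": image})[0]  # generate() returns a list of strings
    # e.g. my_art/portraits/cat.jpg -> my_art/portraits/cat.txt
    path.with_suffix(".txt").write_text(caption, encoding="utf-8")
    print(f"{path}: {caption}")
```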
What are some alternatives?
hope-autonomous-driving - Autonomous Driving project for Euro Truck Simulator 2 Running on Real World
pytorch-widedeep - A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch
ai-deadlines - :alarm_clock: AI conference deadline countdowns
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
virtual_drawing_board - Virtual whiteboard with hand pose estimation
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Mask3D - Mask3D predicts accurate 3D semantic instances achieving state-of-the-art on ScanNet, ScanNet200, S3DIS and STPLS3D.
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
IntroDLPython - This repository is updated by a number of introductory projects to deep learning with Python.
linkis - Apache Linkis builds a computation middleware layer to facilitate connection, governance and orchestration between the upper applications and the underlying data engines.
pykitti - Python tools for working with KITTI data.
multimodal - A collection of multimodal datasets, and visual features for VQA and captioning in pytorch. Just run "pip install multimodal"