multimodal
A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run `pip install multimodal` (by cdancette)
robo-vln
PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" (by GT-RIPL)
| | multimodal | robo-vln |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 70 | 61 |
| Growth | - | - |
| Activity | 0.0 | 2.9 |
| Last commit | about 2 years ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multimodal
Posts with mentions or reviews of multimodal. We have used some of these posts to build our list of alternatives and similar projects.
- [P] multimodal: a library for VQA / vision and language research
  Hi everyone, I am currently building a library for vision & language research: https://github.com/cdancette/multimodal
robo-vln
Posts with mentions or reviews of robo-vln. We have used some of these posts to build our list of alternatives and similar projects.
- [R] Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation
  Project webpage: https://zubair-irshad.github.io/projects/robo-vln.html
  PyTorch code and dataset: https://github.com/GT-RIPL/robo-vln
  arXiv paper: https://arxiv.org/abs/2104.1067