| | InternVideo | mmaction2 |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 1,338 | 4,212 |
| Stars growth | 5.6% | 2.1% |
| Activity | 8.7 | 3.5 |
| Latest commit | 17 days ago | about 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
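The activity metric described above can be sketched as a recency-weighted commit count mapped to a percentile rank on a 0–10 scale. The half-life weighting and the scaling below are assumptions for illustration, not the site's actual formula.

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted commit score: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits count
    more than older ones (the half-life value is an assumption)."""
    return sum(0.5 ** ((now - d).days / half_life_days) for d in commit_dates)

def relative_activity(score, all_scores):
    """Map a raw score to a 0-10 scale by percentile rank, so a
    project in the top 10% of tracked projects lands at 9.0+."""
    below = sum(1 for s in all_scores if s < score)
    return 10.0 * below / len(all_scores)

now = datetime(2024, 1, 1)
# Ten hypothetical projects; the last one commits daily and should rank highest.
histories = [[now - timedelta(days=30 * k)] * 3 for k in range(1, 10)]
histories.append([now - timedelta(days=d) for d in range(30)])
scores = [activity_score(h, now) for h in histories]
print(relative_activity(scores[-1], scores))  # daily committer → 9.0
```

Because the score is relative, a project can lose activity simply by other tracked projects speeding up.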
InternVideo
- [Demo] Watch Videos with ChatGPT
Thanks for your interest! If you have any ideas for making the demo more user-friendly, please don't hesitate to share them with us. We are open to discussing ideas about video foundation models and other topics. We have made some progress in these areas (InternVideo, VideoMAE v2, UMT, and more). We believe that user-level intelligent video understanding is on the horizon given current LLMs, computing power, and video data.
- [R] InternVideo: General Video Foundation Models via Generative and Discriminative Learning
Relevant code: https://github.com/OpenGVLab/InternVideo
Foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models focus only on image-level pretraining and adaptation, which limits them on dynamic and complex video-level understanding tasks. To fill the gap, we present general video foundation models, InternVideo, by taking advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets across extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications. In particular, our methods obtain 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively show the generality of our InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo.
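The "learnable coordination" of the two pretraining streams can be illustrated with a minimal sketch: blend the generative (masked-modeling) features with the discriminative (contrastive) features through a learnable gate. The function names and the scalar-gate form below are hypothetical; InternVideo's actual coordination module is more elaborate (e.g., cross-model attention), so this only shows the idea of a learnable blend.

```python
import math

def coordinate(gen_feat, dis_feat, alpha_logit):
    """Hypothetical coordination: a convex combination of the
    generative (masked-modeling) and discriminative (contrastive)
    feature vectors, weighted by a learnable gate
    alpha = sigmoid(alpha_logit). During training, alpha_logit
    would be a parameter updated by gradient descent."""
    alpha = 1.0 / (1.0 + math.exp(-alpha_logit))
    return [alpha * g + (1.0 - alpha) * d for g, d in zip(gen_feat, dis_feat)]

# With alpha_logit = 0.0, the gate is 0.5 and the streams mix equally.
fused = coordinate([1.0, 0.0], [0.0, 1.0], alpha_logit=0.0)
print(fused)  # [0.5, 0.5]
```

Because the gate is learned rather than fixed, the model can shift toward whichever representation helps a given downstream task.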
mmaction2
- How good does contextual action recognition get?
MMAction2 has some examples: https://github.com/open-mmlab/mmaction2
- MMDeploy: Deploy All the Algorithms of OpenMMLab
MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- [D] Deep Learning Framework for C++.
I agree that this works most of the time, but some models have layers that ONNX does not support. One example is the spatiotemporal models in mmaction2 from open-mmlab.
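One way to surface such gaps before a failed export is to check the model's layer types against the target's supported op set up front, rather than letting the export fail midway. The layer names and the supported set below are made up for illustration; a real check would consult the actual exporter's operator coverage for the chosen opset.

```python
# Hypothetical pre-export check: collect a model's layer types and flag
# any that the export target (e.g., an ONNX opset) does not cover.
# SUPPORTED_OPS is an illustrative stand-in, not a real coverage list.
SUPPORTED_OPS = {"Conv3d", "Linear", "ReLU", "BatchNorm3d"}

def unsupported_layers(layer_types):
    """Return the layer types absent from the export target's op set."""
    return sorted(set(layer_types) - SUPPORTED_OPS)

# Temporal-shift and deformable layers are typical export pain points.
model_layers = ["Conv3d", "TemporalShift", "ReLU", "DeformConv3d"]
print(unsupported_layers(model_layers))  # ['DeformConv3d', 'TemporalShift']
```

When the check reports gaps, the usual options are rewriting the layer with supported primitives, registering a custom exporter op, or deploying that submodel outside ONNX.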
- Textbook or blogs for video understanding
No book or blog, but a great framework: https://github.com/open-mmlab/mmaction2
- Applications of Deep Neural Networks [pdf]
Shameless ad: try mmaction2, where every result is reproducible: https://github.com/open-mmlab/mmaction2. Model zoo: https://mmaction2.readthedocs.io/en/latest/modelzoo.html
What are some alternatives?
VideoMAEv2 - [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
mmpose - OpenMMLab Pose Estimation Toolbox and Benchmark.
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
temporal-shift-module - [ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding
ALPRO - Align and Prompt: Video-and-Language Pre-training with Entity Prompts
compare_gan - Compare GAN code.
text-to-image-eval - Evaluate custom and HuggingFace text-to-image/zero-shot-image-classification models like CLIP, SigLIP, DFN5B, and EVA-CLIP. Metrics include Zero-shot accuracy, Linear Probe, Image retrieval, and KNN accuracy.
mmflow - OpenMMLab optical flow toolbox and benchmark
EgoVideo - [CVPR 2024 Champions] Solutions for EgoVis Challenges in CVPR 2024
Video-Dataset-Loading-Pytorch - Generic PyTorch dataset implementation to load and augment VIDEOS for deep learning training loops.
phar - deep learning sex position classifier
mmrotate - OpenMMLab Rotated Object Detection Toolbox and Benchmark