CoCa-pytorch vs InternVideo

| | CoCa-pytorch | InternVideo |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 975 | 933 |
| Growth | - | 10.9% |
| Activity | 6.2 | 8.0 |
| Latest commit | 5 months ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
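The site does not publish its exact formula, but a recency-weighted activity score of this kind can be sketched as an exponential decay over commit ages. The half-life and weighting below are illustrative assumptions, not the site's actual method:

```python
import math

def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum of per-commit weights that decay exponentially with age.

    Each commit contributes 2 ** (-age / half_life): a commit from
    today counts 1.0, a commit one half-life old counts 0.5, and so
    on. The half-life value here is an illustrative assumption.
    """
    return sum(2 ** (-age / half_life_days) for age in commit_ages_days)

# A repo with recent commits scores higher than one with the same
# number of old commits, matching the "recent commits have higher
# weight" behavior described above.
recent = activity_score([1, 2, 3, 5, 8])
stale = activity_score([200, 210, 250, 300, 400])
assert recent > stale
```

Under this sketch, a burst of commits in the last few days dominates the score, which is consistent with InternVideo (3 days since last commit) scoring higher than CoCa-pytorch (5 months) despite fewer stars.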
InternVideo
- [Demo] Watch Videos with ChatGPT
Thanks for your interest! If you have any ideas to make the demo more user-friendly, please do not hesitate to share them with us. We are open to discussing ideas about video foundation models and related topics. We have made progress in these areas (InternVideo, VideoMAE v2, UMT, and more). We believe that user-level intelligent video understanding is within reach given current LLMs, computing power, and video data.
- [R] InternVideo: General Video Foundation Models via Generative and Discriminative Learning
Found relevant code at https://github.com/OpenGVLab/InternVideo
Foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models focus only on image-level pretraining and adaptation, which limits them on dynamic and complex video-level understanding tasks. To fill this gap, we present general video foundation models, InternVideo, by taking advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets from extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications. In particular, our methods obtain 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results demonstrate the generality of InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo.
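The video-language contrastive objective mentioned in the abstract is typically an InfoNCE-style loss over paired video and text embeddings: each video should match its own caption against all other pairs in the batch, and vice versa. A minimal pure-Python sketch (the temperature and embeddings are illustrative; this is not InternVideo's actual implementation) might look like:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def contrastive_loss(video_embs, text_embs, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (video, text) pairs.

    Pair i's embeddings should be more similar to each other than to
    any other sample in the batch, in both directions.
    """
    v = [normalize(e) for e in video_embs]
    t = [normalize(e) for e in text_embs]
    n = len(v)
    # Cosine-similarity logits, scaled by the temperature.
    logits = [[sum(a * b for a, b in zip(v[i], t[j])) / temperature
               for j in range(n)] for i in range(n)]

    def nll(row, target):
        # Numerically stable negative log-softmax at `target`.
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        return log_z - row[target]

    v2t = sum(nll(logits[i], i) for i in range(n)) / n          # video -> text
    t2v = sum(nll([logits[j][i] for j in range(n)], i)          # text -> video
              for i in range(n)) / n
    return (v2t + t2v) / 2
```

Correctly aligned pairs yield a lower loss than shuffled ones, which is the gradient signal that pulls matching video and text representations together during pretraining.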
What are some alternatives?
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
VideoMAEv2 - [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
RETRO-pytorch - Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch
mmaction2 - OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
TimeSformer-pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
PaLM-pytorch - Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
ego4d-eccv2022-solutions - Champion solutions for the Ego4D Challenge at ECCV 2022
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
ALPRO - Align and Prompt: Video-and-Language Pre-training with Entity Prompts
x-clip - A concise but complete implementation of CLIP with various experimental improvements from recent papers
MiniGPT-4 - Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)