PaddleClas vs Video-Swin-Transformer

| | PaddleClas | Video-Swin-Transformer |
|---|---|---|
| Mentions | 2 | 7 |
| Stars | 5,262 | 1,309 |
| Growth | 0.7% | 0.0% |
| Activity | 5.4 | 0.0 |
| Latest commit | 7 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PaddleClas
- Baidu AI Research Team Introduces ‘PP-ShiTu’: A Practical Lightweight Image Recognition System (Quick 5 Min Read | Paper | Github)
- Baidu Research Introduces PP-LCNet: A Lightweight CPU Convolutional Neural Network With Better Accuracy And Performance (3 Min Quick Read | Paper | Github)
Video-Swin-Transformer
- Explanation needed
- Explanation needed [P]
- Explanation needed [R]
- Weekly Entering & Transitioning Thread | 20 Feb 2022 - 27 Feb 2022

  PROBLEM STATEMENT: Develop an efficient common strategy and a relevant implementation to extract the video-based models in the black-box and grey-box settings, across the following two problem statements.

  1. Action classification: model extraction for the Swin-T model for action classification on the Kinetics-400 dataset. Download the model from here: https://github.com/SwinTransformer/Video-Swin-Transformer
  2. Video classification: model extraction for the MoViNet-A2-Base model for video classification on the Kinetics-600 dataset. Download the model from here: https://tfhub.dev/tensorflow/movinet/a2/base/kinetics-600/classification/3

  Black-box setting: do not use any relevant available dataset; use synthetic or generated data, without using the Kinetics series datasets. Also, do not use the same model architecture as the original model to train the extracted model.

  Grey-box setting: you can use 5% of the original data (a balanced representation of classes) as a starting point to generate the attack dataset. Also, do not use the same model architecture as the original model to train the extracted model.

  Can someone explain the problem statement in an easy, understandable way? What I think is that the models have already been provided and we have to do something in the black-box and grey-box settings. Can someone explain briefly what we have to do in each?
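In broad terms, "model extraction" here means: query the victim model as an opaque API and train your own student model to imitate its outputs. Below is a minimal sketch of that query-and-distill loop in plain PyTorch; the tiny `victim` and `student` networks and the random query tensors are stand-ins (not Swin-T, MoViNet, or real video data), used only to illustrate the mechanics.

```python
# Minimal sketch of black-box model extraction (assumptions: plain PyTorch;
# tiny stand-in models instead of Swin-T / MoViNet, and random tensors
# instead of synthetic video data -- only the query/train loop is the point).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the victim model: we may only call it, never read its weights.
victim = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()

# Student with a *different* architecture, as the rules require.
student = nn.Sequential(nn.Linear(16, 8), nn.Tanh(), nn.Linear(8, 4))

opt = torch.optim.Adam(student.parameters(), lr=1e-2)
loss_fn = nn.KLDivLoss(reduction="batchmean")  # expects log-probs vs. probs

for _ in range(200):
    x = torch.randn(64, 16)               # synthetic "attack" queries
    with torch.no_grad():
        soft = victim(x).softmax(dim=-1)  # black-box API returns predictions
    opt.zero_grad()
    loss = loss_fn(student(x).log_softmax(dim=-1), soft)
    loss.backward()
    opt.step()

# Measure how often the student agrees with the victim on fresh queries.
xq = torch.randn(256, 16)
agreement = (student(xq).argmax(-1) == victim(xq).argmax(-1)).float().mean()
print(round(agreement.item(), 2))
```

In the grey-box setting the only change is the query distribution: start from the permitted 5% of real data and augment or perturb it to generate the attack dataset, instead of sampling queries blindly.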
- Action recognition models for images

  There are two main variants of the Swin transformer: the original Swin Transformer (official implementation here) and the Video Swin Transformer (official implementation here). The two architectures are very similar; the differences are mainly in the size of the input. The Swin transformer pretrained on ImageNet can be used as the backbone for different applications, either image- or video-based. In fact, the authors pretrained the original Swin transformer on ImageNet, modified the input size, and then fine-tuned it on video action recognition datasets. In your case, you can use the original Swin transformer pretrained on ImageNet and fine-tune it on your own dataset without modifying anything about the input size, since it is designed to process images.
- [R] New Study Proposes CW Networks: Greater Expressive Power Than GNNs

  The code is available on the project GitHub. The paper Video Swin Transformer is on arXiv.
- [R] Video Swin Transformer: SOTA on Video Recognition (84.9% top 1 on Kinetics-400 and 69.6% top 1 on Something-Something V2)
What are some alternatives?
Swin-Transformer-Object-Detection - This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" on Object Detection and Instance Segmentation.
Swin-Transformer-Tensorflow - Unofficial implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (https://arxiv.org/abs/2103.14030)
Swin-Transformer-Semantic-Segmentation - This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" on Semantic Segmentation.
MoViNet-pytorch - PyTorch implementation of MoViNets: Mobile Video Networks for Efficient Video Recognition.
efficientnet - Implementation of the EfficientNet model in Keras and TensorFlow Keras.
DWPose - "Effective Whole-body Pose Estimation with Two-stages Distillation" (ICCV 2023, CV4Metaverse Workshop)
Swin-Transformer - This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
HugsVision - HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision.
data - Data and code behind the articles and graphics at FiveThirtyEight