Video-Swin-Transformer

This is an official implementation of "Video Swin Transformer". (by SwinTransformer)

Video-Swin-Transformer Alternatives

Similar projects and alternatives to Video-Swin-Transformer

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a better Video-Swin-Transformer alternative or higher similarity.

Video-Swin-Transformer reviews and mentions

Posts with mentions or reviews of Video-Swin-Transformer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-02-23.
  • Explanation needed
    1 project | /r/learnmachinelearning | 23 Feb 2022
  • Explanation needed [P]
    1 project | /r/datascience | 23 Feb 2022
  • Explanation needed [R]
    1 project | /r/MachineLearning | 23 Feb 2022
  • Weekly Entering & Transitioning Thread | 20 Feb 2022 - 27 Feb 2022
    4 projects | /r/datascience | 23 Feb 2022
    PROBLEM STATEMENT: Develop an efficient common strategy, and a relevant implementation, to extract video-based models in the black-box and grey-box settings for the following two problems.
    1. Action Classification: model extraction for the Swin-T action-classification model on the Kinetics-400 dataset. Download the model from here: https://github.com/SwinTransformer/Video-Swin-Transformer
    2. Video Classification: model extraction for the MoViNet-A2-Base video-classification model on the Kinetics-600 dataset. Download the model from here: https://tfhub.dev/tensorflow/movinet/a2/base/kinetics-600/classification/3
    Black-box setting: do not use any relevant available dataset; use synthetic or generated data without touching the Kinetics series of datasets. Also, do not use the same model architecture as the original model to train the extracted model.
    Grey-box setting: you may use 5% of the original data (a balanced representation of classes) as a starting point to generate the attack dataset. Again, do not use the same model architecture as the original model to train the extracted model.
    Can someone explain the problem statement in an easy, understandable way? What I think is that the models have already been provided and we have to do something in the black-box and grey-box settings. Can someone explain briefly what we have to do in the black-box/grey-box settings?
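The black-box setting described in this post boils down to a standard model-extraction loop: query the victim model on synthetic inputs you generate yourself, observe only its outputs, and train a substitute of a different architecture to mimic them. A minimal sketch, using toy stand-in networks (the real victim would be the Swin-T Kinetics-400 model; all layer sizes and step counts here are illustrative, not part of the challenge):

```python
import torch
import torch.nn as nn

# Toy stand-ins: the victim would really be the provided Swin-T model
# (400 Kinetics classes); the substitute must use a different architecture.
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 400))
substitute = nn.Sequential(
    nn.Flatten(), nn.Linear(3 * 32 * 32, 200), nn.ReLU(), nn.Linear(200, 400)
)

opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
loss_fn = nn.KLDivLoss(reduction="batchmean")

for step in range(5):                        # a few illustrative steps
    x = torch.randn(8, 3, 32, 32)            # synthetic inputs, no Kinetics data
    with torch.no_grad():
        soft = victim(x).softmax(dim=-1)     # black-box query: only outputs seen
    pred = substitute(x).log_softmax(dim=-1)
    loss = loss_fn(pred, soft)               # match victim's soft labels
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the grey-box setting the same loop would seed its query data from the allowed 5% of real examples instead of pure noise.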
  • Action recognition models for images
    2 projects | /r/deeplearning | 28 Jan 2022
    There are two main variants of the Swin transformer: the original Swin transformer (official implementation here) and the Video Swin transformer (official implementation here). The two architectures are very similar, with the differences lying mainly in the size of the input. The Swin transformer pretrained on ImageNet can be used as the backbone for different image- or video-based applications. In fact, the authors pretrained the original Swin transformer on ImageNet, then modified the input size and fine-tuned it on video action-recognition datasets. In your case, you can take the original Swin transformer pretrained on ImageNet and fine-tune it on your own dataset without modifying anything about the input size, since it is designed to process images.
  • [R] New Study Proposes CW Networks: Greater Expressive Power Than GNNs
    1 project | /r/MachineLearning | 1 Jul 2021
    The code is available on project GitHub. The paper Video Swin Transformer is on arXiv.
  • [R] Video Swin Transformer: SOTA on Video Recognition (84.9% top 1 on Kinetics-400 and 69.6% top 1 on Something-Something V2)
    1 project | /r/MachineLearning | 25 Jun 2021

Stats

Basic Video-Swin-Transformer repo stats
Mentions: 7
Stars: 1,309
Activity: 0.0
Last commit: about 1 year ago
