Unicorn
ECCV22-P3AFormer-Tracking-Objects-as-Pixel-wise-Distributions
| | Unicorn | ECCV22-P3AFormer-Tracking-Objects-as-Pixel-wise-Distributions |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 942 | 157 |
| Growth | - | 3.2% |
| Activity | 0.0 | 0.0 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Unicorn
- need help with object detection and object tracking using yolov4
  Also check out Unicorn - https://github.com/MasterBin-IIAU/Unicorn
- [D] Most Popular AI Research July 2022 pt. 2 - Ranked Based On GitHub Stars
- Researchers from Bytedance and Dalian University Propose 🦄 ‘Unicorn’: a Unified Computer Vision Approach to Address Four Tracking Tasks Using a Single Model with the Same Model Parameters
- [R] Unicorn 🦄: Towards Grand Unification of Object Tracking (Video Demo)
  Brief Overview: We present a unified method, termed Unicorn, that can simultaneously solve four tracking problems (SOT, MOT, VOS, MOTS) with a single network using the same model parameters. For the first time, we accomplish a grand unification of the tracking network architecture and learning paradigm. Unicorn performs on par with or better than its task-specific counterparts on 8 tracking datasets, including LaSOT, TrackingNet, MOT17, BDD100K, DAVIS16-17, MOTS20, and BDD100K MOTS. Our work is accepted to ECCV 2022 as an oral presentation! Paper: https://arxiv.org/abs/2207.07078 Code: https://github.com/MasterBin-IIAU/Unicorn
- [R] Unicorn 🦄: Towards Grand Unification of Object Tracking
  Code for https://arxiv.org/abs/2207.07078 found: https://github.com/MasterBin-IIAU/Unicorn
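The unified-tracking idea described in the posts above - one set of weights serving SOT, MOT, VOS, and MOTS, with box outputs for the tracking tasks and mask outputs for the segmentation tasks - can be illustrated with a small sketch. This is a hypothetical interface for exposition only, not Unicorn's actual API; the class name `UnifiedTracker` and its `track` method are made up here.

```python
from dataclasses import dataclass

@dataclass
class UnifiedTracker:
    # The four tasks a single set of weights serves, per the paper:
    # SOT (single-object tracking), MOT (multi-object tracking),
    # VOS (video object segmentation), MOTS (MOT and segmentation).
    tasks: tuple = ("SOT", "MOT", "VOS", "MOTS")

    def track(self, frames, task):
        """Run one task over a sequence of frames with shared weights."""
        if task not in self.tasks:
            raise ValueError(f"unsupported task: {task}")
        # Segmentation tasks emit masks; pure tracking tasks emit boxes.
        kind = "mask" if task in ("VOS", "MOTS") else "box"
        # Placeholder output: one result per frame.
        return [{"frame": i, "output": kind} for i, _ in enumerate(frames)]

tracker = UnifiedTracker()
print(tracker.track(frames=[0, 1, 2], task="MOT")[0]["output"])  # box
```

The point of the sketch is only the dispatch structure: the same object (standing in for one network with one parameter set) answers all four task types, differing only in output format.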
ECCV22-P3AFormer-Tracking-Objects-as-Pixel-wise-Distributions
- [D] Approaches to new code: create a map of the code structure, does it make sense?
  An example can be found here: https://github.com/dvlab-research/ECCV22-P3AFormer-Tracking-Objects-as-Pixel-wise-Distributions/raw/main/figs/model_mind_flow.png
What are some alternatives?
deeplab2 - DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
VNext - Next-generation video instance recognition framework on top of Detectron2, which supports InstMove (CVPR 2023), SeqFormer (ECCV Oral), and IDOL (ECCV Oral)
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
py-motmetrics - :bar_chart: Benchmark multiple object trackers (MOT) in Python
theseus - A library for differentiable nonlinear optimization
classy-sort-yolov5 - Ready-to-use realtime multi-object tracker that works for any object category. YOLOv5 + SORT implementation.
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
deep_sort_pytorch - MOT using deepsort and yolov3 with pytorch
yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
NUWA - A unified 3D Transformer Pipeline for visual synthesis
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
Cream - This is a collection of our NAS and Vision Transformer work. [Moved to: https://github.com/microsoft/AutoML]