| | zero-shot-object-tracking | py-motmetrics |
|---|---|---|
| Mentions | 10 | 1 |
| Stars | 351 | 1,326 |
| Growth | 0.9% | - |
| Activity | 0.6 | 4.4 |
| Last commit | 26 days ago | 20 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zero-shot-object-tracking
-
How to Track Flying Objects?
I’ve seen a bunch of drone-detection computer vision projects. Usually, though, they’re detecting drones from other drones (e.g. for autonomous racing[1] or drone defense).
A challenge with doing it from the ground is that the drones will be quite small relative to the size of the image. With sufficient compute and several cameras, a tiling-based approach[2] should work!
If you want to do unique-identification you’ll also need object tracking[3].
This is exactly the type of project Roboflow (our startup) is built to empower! Happy to chat/help further (e.g. we might be able to help source a good dataset to start from). And if it’s for non-commercial use it should be completely free.
[1] https://blog.roboflow.com/drone-computer-vision-autopilot/
[2] https://blog.roboflow.com/detect-small-objects/
[3] https://blog.roboflow.com/zero-shot-object-tracking/
-
Object tracking in videos?
We use CLIP for object tracking with pretty good results (no second model training required). https://blog.roboflow.com/zero-shot-object-tracking/
-
Hacker News top posts: Aug 28, 2021
Zero Shot Object Tracking (4 comments)
- Need help in camera selection
-
Zero Shot Object Tracking
It uses an object detection model (in our example code[1] we used one from Roboflow Universe[2], but any object detection model should work), then sends a crop of each detected box to CLIP. The resulting feature vector is what Deep SORT uses to differentiate between and track instances across frames.
[1] https://github.com/roboflow-ai/zero-shot-object-tracking
[2] https://universe.roboflow.com
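The crop-embed-match step described above can be sketched in pure Python. This is an illustration, not the repo's actual code: the feature vectors here would come from running CLIP's image encoder on each crop, and greedy matching stands in for Deep SORT's Hungarian assignment plus Kalman-filter motion gating.

```python
import math

def crop(frame, box):
    """Crop a nested-list 'image' (rows of pixels) to an (x0, y0, x1, y1) box;
    the crop is what gets sent to the embedding model (CLIP in the repo)."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]

def cosine_distance(a, b):
    """1 - cosine similarity: the appearance cost Deep SORT computes
    between a track's stored feature and a new detection's feature."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def match(track_feats, det_feats, max_dist=0.5):
    """Greedily pair each track with its nearest unclaimed detection
    in appearance space; pairs above max_dist are left unmatched."""
    matches, used = [], set()
    for ti, tf in enumerate(track_feats):
        best, best_d = None, max_dist
        for di, df in enumerate(det_feats):
            if di in used:
                continue
            d = cosine_distance(tf, df)
            if d < best_d:
                best, best_d = di, d
        if best is not None:
            used.add(best)
            matches.append((ti, best))
    return matches
```

Because CLIP embeddings separate visually distinct instances well, this appearance cost alone is often enough to keep identities stable across frames without training a dedicated re-identification model.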
-
[P] Zero-Shot Object Tracking with CLIP and Deep SORT
Repo: https://github.com/roboflow-ai/zero-shot-object-tracking
- Zero-Shot Object Tracking with CLIP and Deep SORT
- Show HN: Zero-Shot Object Tracking
py-motmetrics
-
HOW to find MOTA and MOTP for MOT evaluation metrics?
I think this repo is a great starting point for your question: https://github.com/cheind/py-motmetrics
What are some alternatives?
Deep-SORT-YOLOv4 - People detection and optional tracking with Tensorflow backend.
multi-object-tracker - Multi-object trackers in Python
norfair - Lightweight Python library for adding real-time multi-object tracking to any detector.
VolleyVision - Applying Deep Learning Approaches to Volleyball Data
FastMOT - High-performance multiple object tracking based on YOLO, Deep SORT, and KLT 🚀
Yolo_mark - GUI for marking bounding boxes of objects in images for training neural network Yolo v3 and v2
ssd_keras - A Keras port of Single Shot MultiBox Detector
yolov4-deepsort - Object tracking implemented with YOLOv4, DeepSort, and TensorFlow.
ByteTrack - [ECCV 2022] ByteTrack: Multi-Object Tracking by Associating Every Detection Box
classy-sort-yolov5 - Ready-to-use realtime multi-object tracker that works for any object category. YOLOv5 + SORT implementation.