py-motmetrics
zero-shot-object-tracking
| | py-motmetrics | zero-shot-object-tracking |
|---|---|---|
| Mentions | 1 | 10 |
| Stars | 1,321 | 348 |
| Growth | - | 0.9% |
| Activity | 4.9 | 0.6 |
| Latest commit | about 2 months ago | 9 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
py-motmetrics
-
How to find MOTA and MOTP for MOT evaluation metrics?
I think this repo is a great starting point for your question: https://github.com/cheind/py-motmetrics
zero-shot-object-tracking
-
How to Track Flying Objects?
I’ve seen a bunch of drone-detection computer vision projects. Usually they’re detecting drones from other drones though (e.g. for autonomous racing[1] or drone defense).
A challenge with doing it from the ground is that the drones will be quite small relative to the size of the image. With sufficient compute and several cameras a tiling-based approach[2] should work!
If you want to do unique-identification you’ll also need object tracking[3].
This is exactly the type of project Roboflow (our startup) is built to empower! Happy to chat/help further (e.g. we might be able to help source a good dataset to start from). And if it’s for non-commercial use it should be completely free.
[1] https://blog.roboflow.com/drone-computer-vision-autopilot/
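The tiling idea mentioned above can be sketched in a few lines of NumPy: split each frame into overlapping tiles, run the detector per tile, and use the returned offsets to map detections back into full-image coordinates. Tile size and overlap here are illustrative defaults, not values from the linked post.

```python
import numpy as np

def tile_offsets(size, tile, stride):
    """Start offsets along one axis; the last tile is kept flush with the edge."""
    offs = list(range(0, max(size - tile, 0) + 1, stride))
    if offs[-1] + tile < size:
        offs.append(size - tile)
    return offs

def tile_image(image, tile=640, overlap=0.2):
    """Split an HxWxC image into overlapping square tiles.

    Returns (tile_array, (x, y)) pairs; add the (x, y) offset to each
    per-tile detection box to recover full-image coordinates.
    """
    h, w = image.shape[:2]
    stride = max(1, int(tile * (1 - overlap)))
    return [
        (image[y:y + tile, x:x + tile], (x, y))
        for y in tile_offsets(h, tile, stride)
        for x in tile_offsets(w, tile, stride)
    ]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a 1080p frame
tiles = tile_image(frame)
print(len(tiles))  # 2 rows x 4 columns = 8 tiles at these settings
```

After detection you would run non-maximum suppression on the merged boxes, since objects straddling tile borders get detected twice.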
-
Object tracking in videos?
We use CLIP for object tracking with pretty good results (no second-model training required). https://blog.roboflow.com/zero-shot-object-tracking/
-
Zero Shot Object Tracking
It uses an object detection model (in our example code[1], we used one from Roboflow Universe[2], but you should be able to use any object detection model), then sends a crop of each detected box to CLIP to get the feature vector that Deep SORT uses to differentiate between and track instances across frames.
[1] https://github.com/roboflow-ai/zero-shot-object-tracking
We haven’t had a chance to run it through eval on standard datasets yet. I’d like to compare it to some of these: https://paperswithcode.com/task/multi-object-tracking
The code is available here if anyone wants to give it a go before we can get to it: https://github.com/roboflow-ai/zero-shot-object-tracking
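The crop-then-embed step described above can be sketched as follows. The `dummy_encode` function is a stand-in for illustration; in the real pipeline you would preprocess each crop and call CLIP's image encoder (e.g. `model.encode_image` from the openai `clip` package) in its place.

```python
import numpy as np

def crop_boxes(frame, boxes):
    """Crop each (x1, y1, x2, y2) detection box out of an HxWxC frame."""
    return [frame[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]

def appearance_features(crops, encode):
    """Run an image encoder over the crops and L2-normalize the results,
    so Deep SORT can compare detections with cosine distance."""
    feats = np.stack([np.asarray(encode(c), dtype=np.float64) for c in crops])
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

# Stand-in encoder for this sketch: mean color per crop. Swap in a real
# image encoder (such as CLIP's encode_image) for actual tracking.
dummy_encode = lambda crop: crop.mean(axis=(0, 1))

frame = np.random.rand(480, 640, 3)
boxes = [(10, 20, 60, 90), (100, 50, 180, 140)]  # hypothetical detections
feats = appearance_features(crop_boxes(frame, boxes), dummy_encode)
print(feats.shape)  # one unit-length vector per detection
```

Because the vectors are normalized, Deep SORT's cosine-distance gating works the same whether the embeddings come from CLIP or a purpose-trained re-identification network.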
What are some alternatives?
Deep-SORT-YOLOv4 - People detection and optional tracking with a TensorFlow backend.
norfair - Lightweight Python library for adding real-time multi-object tracking to any detector.
FastMOT - High-performance multiple object tracking based on YOLO, Deep SORT, and KLT 🚀
ssd_keras - A Keras port of Single Shot MultiBox Detector
ByteTrack - [ECCV 2022] ByteTrack: Multi-Object Tracking by Associating Every Detection Box
yolov4-deepsort - Object tracking implemented with YOLOv4, DeepSort, and TensorFlow.
classy-sort-yolov5 - Ready-to-use realtime multi-object tracker that works for any object category. YOLOv5 + SORT implementation.
yolo-tf2 - YOLO (all versions) implementation in Keras and TensorFlow 2.x
multi-object-tracker - Multi-object trackers in Python
VolleyVision - Applying Deep Learning Approaches to Volleyball Data
Yolo_mark - GUI for marking bounded boxes of objects in images for training neural network Yolo v3 and v2