| | zero-shot-object-tracking | multi-object-tracker |
|---|---|---|
| Mentions | 10 | 4 |
| Stars | 350 | 665 |
| Growth | 1.4% | - |
| Activity | 0.6 | 6.4 |
| Last commit | 18 days ago | 7 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zero-shot-object-tracking
-
How to Track Flying Objects?
I’ve seen a bunch of drone-detection computer vision projects. Usually, though, they’re detecting drones from other drones (e.g. for autonomous racing[1] or drone defense).
A challenge with doing it from the ground is that the drones will be quite small relative to the size of the image. With sufficient compute and several cameras a tiling-based approach[2] should work!
If you want to do unique-identification you’ll also need object tracking[3].
This is exactly the type of project Roboflow (our startup) is built to empower! Happy to chat/help further (e.g. we might be able to help source a good dataset to start from). And if it’s for non-commercial use, it should be completely free.
[1] https://blog.roboflow.com/drone-computer-vision-autopilot/
[2] https://blog.roboflow.com/detect-small-objects/
[3] https://blog.roboflow.com/zero-shot-object-tracking/
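As a rough sketch of the tiling-based approach mentioned above (the helper names `make_tiles` and `shift_box`, the tile size, and the overlap are illustrative assumptions, not taken from the linked post): slide overlapping windows across the full-resolution image, run the detector on each window, and map the detections back to full-image coordinates.

```python
from typing import Iterator, List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1)

def make_tiles(
    img_w: int, img_h: int, tile: int = 640, overlap: int = 64
) -> Iterator[Box]:
    """Yield (x0, y0, x1, y1) windows covering the whole image.

    Adjacent tiles overlap so an object sitting on a seam appears
    whole in at least one tile.
    """
    step = tile - overlap
    for y0 in range(0, max(img_h - overlap, 1), step):
        for x0 in range(0, max(img_w - overlap, 1), step):
            yield x0, y0, min(x0 + tile, img_w), min(y0 + tile, img_h)

def shift_box(box: Box, x0: int, y0: int) -> Box:
    """Map a tile-local detection box back to full-image coordinates."""
    bx0, by0, bx1, by1 = box
    return (bx0 + x0, by0 + y0, bx1 + x0, by1 + y0)
```

In use, you would call the detector once per tile, shift each returned box by the tile's origin, then run non-maximum suppression over the combined boxes to merge duplicates from the overlapping regions.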
-
Object tracking in videos?
We use CLIP for object tracking with pretty good results (with no second model training required). https://blog.roboflow.com/zero-shot-object-tracking/
-
Hacker News top posts: Aug 28, 2021
Zero Shot Object Tracking (4 comments)
- Need help in camera selection
-
Zero Shot Object Tracking
It uses an object detection model (in our example code[1] we used one from Roboflow Universe[2], but you should be able to use any object detection model). A crop of each detected box is then sent to CLIP to get the feature vector that Deep SORT uses to differentiate between and track instances across frames.
[1] https://github.com/roboflow-ai/zero-shot-object-tracking
[2] https://universe.roboflow.com
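The association step described above can be sketched as follows. This is not the repo's actual implementation: in the real pipeline the feature vectors come from running detection crops through CLIP, and Deep SORT performs a matching cascade with Hungarian assignment rather than this simple greedy pass. The function names and the `max_dist` gate are illustrative assumptions.

```python
import numpy as np

def cosine_cost(track_feats: np.ndarray, det_feats: np.ndarray) -> np.ndarray:
    """Cosine-distance matrix between track and detection embeddings."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - t @ d.T

def match(track_feats: np.ndarray, det_feats: np.ndarray, max_dist: float = 0.4):
    """Greedily assign detections to tracks by appearance similarity.

    Pairs whose cosine distance exceeds max_dist are left unmatched;
    unmatched detections would spawn new track IDs downstream.
    """
    cost = cosine_cost(track_feats, det_feats)
    matches, used_t, used_d = [], set(), set()
    # Repeatedly take the cheapest remaining (track, detection) pair.
    for r, c in sorted(np.ndindex(cost.shape), key=lambda rc: cost[rc]):
        if r in used_t or c in used_d or cost[r, c] > max_dist:
            continue
        matches.append((r, c))
        used_t.add(r)
        used_d.add(c)
    return matches
```

The appeal of CLIP here is that its embeddings discriminate between visually similar instances without training a dedicated re-identification model, which is what makes the tracker "zero-shot".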
-
[P] Zero-Shot Object Tracking with CLIP and Deep SORT
Repo: https://github.com/roboflow-ai/zero-shot-object-tracking
- Zero-Shot Object Tracking with CLIP and Deep SORT
- Show HN: Zero-Shot Object Tracking
multi-object-tracker
- Multi-object trackers in Python
-
Difference DeepSort and doing detection on each frame
You may find this useful: https://adipandas.github.io/multi-object-tracker/
-
SORT Tracker adds extra objects
As you can see, in Frame 43 the tracker assigns the ID 10 to a metal post, but a few frames later, when the tracker picks up the same metal post again, it gives it an ID of 11. This shows that the tracker is assigning new IDs to the same detected object. I cannot figure out why this happens or how to fix it. I am using the motrackers module from this repo. I have not made any changes to the mot_yolov3.py file; I have only added a line to print the frame counter.
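A toy tracker (not the motrackers code itself; class and parameter names here are hypothetical) makes it easy to see why this happens: SORT-style trackers drop a track after it goes unmatched for more than a set number of frames, so if the detector misses the post for a few frames, the old ID expires and the next detection starts a fresh one.

```python
class MinimalIoUTracker:
    """Toy SORT-style tracker: IoU matching, new ID when no match."""

    def __init__(self, iou_thresh: float = 0.3, max_lost: int = 1):
        self.next_id = 0
        self.tracks = {}  # track id -> (last box, consecutive missed frames)
        self.iou_thresh = iou_thresh
        self.max_lost = max_lost

    @staticmethod
    def iou(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        ix0, iy0 = max(ax0, bx0), max(ay0, by0)
        ix1, iy1 = min(ax1, bx1), min(ay1, by1)
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
        return inter / union if union else 0.0

    def update(self, detections):
        """Match detections to live tracks; return {track_id: box}."""
        assigned = {}
        unmatched = list(detections)
        for tid, (tbox, lost) in list(self.tracks.items()):
            best = max(unmatched, key=lambda d: self.iou(tbox, d), default=None)
            if best is not None and self.iou(tbox, best) >= self.iou_thresh:
                self.tracks[tid] = (best, 0)
                assigned[tid] = best
                unmatched.remove(best)
            elif lost + 1 > self.max_lost:
                del self.tracks[tid]  # track expired -> its ID is gone for good
            else:
                self.tracks[tid] = (tbox, lost + 1)
        for det in unmatched:  # unmatched detection -> brand-new ID
            self.tracks[self.next_id] = (det, 0)
            assigned[self.next_id] = det
            self.next_id += 1
        return assigned
```

If the detector drops the post for more frames than the tracker's patience allows, the re-detection is indistinguishable from a new object, so it gets a new ID. Raising the tracker's max-lost/max-age setting (or improving detection consistency) is the usual fix.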
-
Example Of A Simple And Well Made Python Project
I have a simple Python project which has gone through at least five iterations since I began working on it. Please see this link: adipandas/multi-object-tracker.
What are some alternatives?
Deep-SORT-YOLOv4 - People detection and optional tracking with Tensorflow backend.
ByteTrack - [ECCV 2022] ByteTrack: Multi-Object Tracking by Associating Every Detection Box
norfair - Lightweight Python library for adding real-time multi-object tracking to any detector.
FastMOT - High-performance multiple object tracking based on YOLO, Deep SORT, and KLT 🚀
ssd_keras - A Keras port of Single Shot MultiBox Detector
yolov4-deepsort - Object tracking implemented with YOLOv4, DeepSort, and TensorFlow.
Kornia - Geometric Computer Vision Library for Spatial AI
Face Recognition - The world's simplest facial recognition api for Python and the command line