ONNX-YOLOv7-Object-Detection vs yolov7
| | ONNX-YOLOv7-Object-Detection | yolov7 |
| --- | --- | --- |
| Mentions | 2 | 33 |
| Stars | 182 | 12,769 |
| Growth | - | - |
| Activity | 0.0 | 3.2 |
| Last commit | about 1 year ago | 10 days ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 only |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
ONNX-YOLOv7-Object-Detection

- [D] Extracting the class labels and bounding boxes for objects from a YOLOv7 model after converting it to an ONNX model

  Finally, I looked for someone who had done similar work with the ONNX model, and I found this repo, which links the same repo I am trying to use. I believe this function does exactly what I want, but I could not understand what it is doing: I don't see how it knows where the number of detections, the bounding boxes, and the class labels sit in the output. Furthermore, I am not sure whether removing end2end and changing the opset version from 12 to 9 affects the output shape, or whether it only touches the internal layers.
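  For what it's worth, the layout is not inferred from the tensor at run time; it is fixed by the export variant. With the repo's --end2end onnxruntime export, NMS runs inside the model and, per the yolov7 README, the single output is an (N, 7) array whose columns are [batch_id, x0, y0, x1, y1, class_id, score]; a model viewer such as netron will confirm what your particular file emits. A minimal parsing sketch under that assumption (model path and preprocessing are placeholders):

  ```python
  import numpy as np
  import onnxruntime as ort

  # Assumes the --end2end onnxruntime export: NMS happens inside the model and
  # the single output is (N, 7), one row per detection:
  # [batch_id, x0, y0, x1, y1, class_id, score]. The model path is a placeholder.
  session = ort.InferenceSession("yolov7-end2end.onnx",
                                 providers=["CPUExecutionProvider"])
  input_name = session.get_inputs()[0].name

  def parse_detections(image_nchw: np.ndarray):
      """image_nchw: float32 (1, 3, H, W), resized/padded, scaled to [0, 1]."""
      (dets,) = session.run(None, {input_name: image_nchw})
      boxes = dets[:, 1:5]                # x0, y0, x1, y1 in input-image pixels
      class_ids = dets[:, 5].astype(int)  # indices into the class-name list
      scores = dets[:, 6]
      return boxes, class_ids, scores
  ```

  Without --end2end the output is instead the raw prediction grid (for a 640x640 input, typically (1, 25200, 85): 4 box values, 1 objectness score, and 80 class scores per candidate), so box decoding and NMS become your job; the opset change from 12 to 9 only constrains which operators the exporter may use and should not alter either output shape.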
- YOLOv7 object detection in Ruby in 10 minutes

      git clone https://github.com/ibaiGorordo/ONNX-YOLOv7-Object-Detection.git
      cd ONNX-YOLOv7-Object-Detection
      pip install -r requirements.txt
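  After installing, inference follows the pattern in the repo's README. The module and class names below are recalled from that README, and the model filename is a placeholder, so verify both against the repo:

  ```python
  import cv2
  from yolov7 import YOLOv7  # import path assumed from the repo's README

  # Placeholder model file; the repo's scripts download ONNX models into models/.
  detector = YOLOv7("models/yolov7_640x640.onnx", conf_thres=0.5, iou_thres=0.5)

  image = cv2.imread("input.jpg")
  boxes, scores, class_ids = detector(image)  # one entry per detected object
  print(list(zip(class_ids, scores)))
  ```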
yolov7
- FLaNK Stack Weekly 16 October 2023
- Train an ML model able to identify animal species

  If you want something off-the-shelf, try YOLOv7.
- A video-based Latin dictionary: get what you see in Latin (beta) - What do you think?

  The current dictionary is still in a beta state and has only been trained on 80 words (e.g. 'man', 'dog', 'car', 'keyboard', 'book', etc.; see the list of words and the dataset). I used the object detection model YOLOv7 (paper; all credit to the authors).
- [D] Extracting the class labels and bounding boxes for objects from a YOLOv7 model after converting it to an ONNX model

  (Please note, this is a re-post of my original question here; I think this subreddit might be more appropriate for it.) At work we use Unity, and we have a project that needs object detection and classification. We decided to use this YOLOv7 model for non-technical reasons: it had to be exactly this model, since the company has pre-trained weights for it. However, Unity only supports ONNX, so I exported the model as an ONNX model using the code provided in the repo:
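  The snippet itself is not quoted in the excerpt; for reference, the end2end export invocation in the yolov7 README looks roughly like this (flags recalled from that README, so check the repo before copying):

      python export.py --weights yolov7-tiny.pt --grid --end2end --simplify \
          --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640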
- Coding Question Help
- DL for the Web: Repository of Models

  GitHub projects offering pretrained weights and train/run scripts. Example
- [OC] Football Player 3D Pose Estimation using YOLOv7 and Matplotlib
- Finding a good Tiny YOLO to train in Python

  The only project I found is this one, which implements YOLOv7.
- Visualizing image augmentations from YOLOv7

  I'm wondering if there's an efficient way to visualize the image augmentations produced by the YOLOv7 hyperparameters list here.
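  One lightweight option is to re-apply at least the colour-space part of that list by hand and save the results. The sketch below mirrors the HSV jitter in yolov7/utils/datasets.py, driven by the hsv_h/hsv_s/hsv_v gains from the hyperparameter YAML; the file paths are placeholders, and the geometric augmentations (mosaic, affine, mixup) are not covered:

  ```python
  import cv2
  import numpy as np
  import yaml

  # Placeholder paths: point these at your hyp YAML and a sample image.
  with open("data/hyp.scratch.p5.yaml") as f:
      hyp = yaml.safe_load(f)

  def augment_hsv(img, hgain, sgain, vgain):
      # Same LUT-based recipe as augment_hsv in yolov7/utils/datasets.py.
      r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1  # random gains
      hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
      x = np.arange(0, 256, dtype=np.int16)
      lut_hue = ((x * r[0]) % 180).astype(img.dtype)
      lut_sat = np.clip(x * r[1], 0, 255).astype(img.dtype)
      lut_val = np.clip(x * r[2], 0, 255).astype(img.dtype)
      img_hsv = cv2.merge((cv2.LUT(hue, lut_hue),
                           cv2.LUT(sat, lut_sat),
                           cv2.LUT(val, lut_val)))
      return cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR)

  img = cv2.imread("sample.jpg")
  for i in range(4):  # write a few random variants for side-by-side comparison
      out = augment_hsv(img, hyp["hsv_h"], hyp["hsv_s"], hyp["hsv_v"])
      cv2.imwrite(f"aug_{i}.jpg", out)
  ```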
- Train YOLOv8 Object Detection on Custom Dataset Tutorial

  yolov7: https://github.com/WongKinYiu/yolov7#performance
What are some alternatives?
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
yolov3 - YOLOv3 in PyTorch > ONNX > CoreML > TFLite
netron - Visualizer for neural network, deep learning and machine learning models
edgetpu - Coral issue tracker (and legacy Edge TPU API source)
onnxruntime-ruby - Run ONNX models in Ruby
edgetpu-yolo - Minimal-dependency Yolov5 export and inference demonstration for the Google Coral EdgeTPU
models - A collection of pre-trained, state-of-the-art models in the ONNX format
YOLOv4 - Port of YOLOv4 to C# + TensorFlow
AS-One - Easy & Modular Computer Vision Detectors and Trackers - Run YOLO-NAS,v8,v7,v6,v5,R,X in under 20 lines of code.
darknet - Convolutional Neural Networks
blink-morse - Computer vision application to type based on detection of eyes blinking morse code.
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model