trt_pose
torch2trt
| | trt_pose | torch2trt |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 921 | 4,376 |
| Growth | 0.0% | 1.4% |
| Activity | 0.0 | 3.1 |
| Last commit | over 1 year ago | 24 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trt_pose
- [P] Football Player 3D Pose Estimation using YOLOv7
You can try trt_pose rather than YOLO. It's super fast. I am also doing 3D pose estimation, and with trt_pose I get the 2D poses at more than 100 FPS. https://github.com/NVIDIA-AI-IOT/trt_pose
- How to set up gesture recognition on the Nano?
I just followed the setup instructions for TRT_pose and then TRT_pose_hand and it was working great for me! You will need to follow these steps first to get TRT_pose on your device.
- How to use the Nano and openpose to detect human bodies in footage and remove the colors that don't belong to them
I think you are looking for trt_pose, available here: https://github.com/NVIDIA-AI-IOT/trt_pose. I think this is the best library for real-time pose tracking on the Nano.
torch2trt
- [D] How you deploy your ML model?
- PyTorch 1.10
The main thing you want for server inference is auto-batching. It's a feature that's included in onnxruntime, TorchServe, NVIDIA Triton Inference Server and Ray Serve.
If you have a lot of preprocessing and post-processing logic in your model, it can be hard to export it for onnxruntime or Triton, so I usually recommend starting with Ray Serve (https://docs.ray.io/en/latest/serve/index.html) and using an actor that runs inference with a model that has been quantized or optimized with TensorRT (https://github.com/NVIDIA-AI-IOT/torch2trt).
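The auto-batching idea mentioned above can be sketched in plain Python. This is a toy illustration only, not how onnxruntime, Triton or Ray Serve actually implement it (real servers add queues, timeouts and worker threads); `MicroBatcher` and its methods are hypothetical names:

```python
# Toy sketch of server-side auto-batching: buffer single requests and run
# the model once per full batch instead of once per request.
class MicroBatcher:
    def __init__(self, model_fn, batch_size):
        self.model_fn = model_fn      # takes a list of inputs, returns a list of outputs
        self.batch_size = batch_size
        self.pending = []             # (input, result_slot) pairs awaiting a flush

    def submit(self, x):
        slot = {}
        self.pending.append((x, slot))
        if len(self.pending) >= self.batch_size:
            self.flush()
        return slot                   # filled in once the batch runs

    def flush(self):
        if not self.pending:
            return
        inputs = [x for x, _ in self.pending]
        outputs = self.model_fn(inputs)   # one batched call instead of N single calls
        for (_, slot), y in zip(self.pending, outputs):
            slot["result"] = y
        self.pending = []


# Example: a stand-in "model" that squares its whole batch in one call.
batcher = MicroBatcher(lambda xs: [x * x for x in xs], batch_size=3)
slots = [batcher.submit(x) for x in (2, 3, 4)]   # third submit triggers the flush
```

The payoff on a GPU is that one batched forward pass amortizes kernel-launch and memory-transfer overhead across all pending requests.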
- Jetson Nano: TensorFlow model. Possibly I should use PyTorch instead?
https://github.com/NVIDIA-AI-IOT/torch2trt <- pretty straightforward
https://github.com/jkjung-avt/tensorrt_demos <- this helped me a lot
- How to get a TensorFlow model to run on Jetson Nano?
I find PyTorch easier to work with generally. Nvidia has a PyTorch-to-TensorRT converter which yields significant speedups and has a simple Python API. Convert the PyTorch model on the Nano itself.
What are some alternatives?
lightweight-human-pose-estimation.pytorch - Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
U-2-Net - The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."
onnx-simplifier - Simplify your onnx model
trt_pose_hand - Real-time hand pose estimation and gesture classification using TensorRT
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Real-time-GesRec - Real-time Hand Gesture Recognition with PyTorch on EgoGesture, NvGesture, Jester, Kinetics and UCF101
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
CSI-Camera - Simple example of using a CSI-Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Developer Kit
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
yolo_tracking - BoxMOT: pluggable SOTA tracking modules for segmentation, object detection and pose estimation models
tensorrt_demos - TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet