trt_pose VS torch2trt

Compare trt_pose vs torch2trt and see how they differ.

              trt_pose           torch2trt
Mentions      3                  5
Stars         921                4,388
Growth        0.0%               1.7%
Activity      0.0                3.1
Last commit   over 1 year ago    about 1 month ago
Language      Python             Python
License       MIT License        MIT License
  • Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
  • Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
  • Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

trt_pose

Posts with mentions or reviews of trt_pose. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2021-04-21.

torch2trt

Posts with mentions or reviews of torch2trt. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2021-10-27.
  • [D] How you deploy your ML model?
    5 projects | /r/MachineLearning | 27 Oct 2021
  • PyTorch 1.10
    8 projects | news.ycombinator.com | 22 Oct 2021
    The main thing you want for server inference is auto-batching. It's a feature included in ONNX Runtime, TorchServe, NVIDIA Triton Inference Server, and Ray Serve.

    If you have a lot of preprocessing and post-processing logic in your model, it can be hard to export for ONNX Runtime or Triton, so I usually recommend starting with Ray Serve (https://docs.ray.io/en/latest/serve/index.html) and using an actor that runs inference with a quantized model or one optimized with TensorRT (https://github.com/NVIDIA-AI-IOT/torch2trt). A sketch of this actor pattern appears after this list.

  • Jetson Nano: TensorFlow model. Possibly I should use PyTorch instead?
    2 projects | /r/pytorch | 4 Jun 2021
    https://github.com/NVIDIA-AI-IOT/torch2trt <- pretty straightforward
    https://github.com/jkjung-avt/tensorrt_demos <- this helped me a lot
  • How to get TensorFlow model to run on Jetson Nano?
    1 project | /r/computervision | 4 Jun 2021
    I find PyTorch easier to work with generally. Nvidia has a PyTorch --> TensorRT converter which yields some significant speedups and has a simple Python API. Convert the PyTorch model on the Nano itself; a conversion sketch follows below.
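
The converter referred to in the last two posts is torch2trt. Below is a minimal sketch of its Python API, assuming a torchvision ResNet-18 as the model; the model choice and output file name are illustrative, not taken from the posts.

```python
# Minimal torch2trt sketch: convert a PyTorch model to a TensorRT engine.
# The model (ResNet-18) and the checkpoint file name are illustrative choices.
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval().cuda()

# torch2trt builds the engine from an example input, so the shape used here
# fixes the input shape the engine will accept.
x = torch.randn(1, 3, 224, 224).cuda()
model_trt = torch2trt(model, [x], fp16_mode=True)  # fp16 is worthwhile on Jetson

# The returned module has the same call signature as the original model.
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))  # conversion error; should be small

# Save the optimized module like a normal state dict for later reuse.
torch.save(model_trt.state_dict(), 'resnet18_trt.pth')
```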
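
And here is a hedged sketch of the actor pattern described in the PyTorch 1.10 thread above: a GPU actor loads the saved torch2trt engine and keeps pre/post-processing in plain Python. The actor name, checkpoint path, and batch shape are assumptions for illustration, and exact Ray Serve APIs vary by version, so this uses the plain Ray actor API.

```python
# Hedged sketch of the pattern from the post above: a Ray actor that holds a
# torch2trt-optimized model. Names, paths, and shapes here are illustrative.
import numpy as np
import ray
import torch
from torch2trt import TRTModule

ray.init()

@ray.remote(num_gpus=1)
class InferenceActor:
    def __init__(self, weights_path):
        # Reload the engine saved by the conversion sketch above.
        self.model = TRTModule()
        self.model.load_state_dict(torch.load(weights_path))

    def predict(self, batch):
        # Pre/post-processing stays in ordinary Python here, which is the
        # appeal of this approach versus exporting for ONNX Runtime/Triton.
        x = torch.as_tensor(batch).cuda()
        with torch.no_grad():
            out = self.model(x)
        return out.cpu().numpy()

actor = InferenceActor.remote('resnet18_trt.pth')
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = ray.get(actor.predict.remote(batch))
```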

What are some alternatives?

When comparing trt_pose and torch2trt you can also consider the following projects:

lightweight-human-pose-estimation.pytorch - Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.

TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

U-2-Net - The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."

onnx-simplifier - Simplify your onnx model

trt_pose_hand - Real-time hand pose estimation and gesture classification using TensorRT

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

Real-time-GesRec - Real-time Hand Gesture Recognition with PyTorch on EgoGesture, NvGesture, Jester, Kinetics and UCF101

transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀

yolo_tracking - BoxMOT: pluggable SOTA tracking modules for segmentation, object detection and pose estimation models

onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

CSI-Camera - Simple example of using a CSI-Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Developer Kit

tensorrt_demos - TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet