opti_models
PyTorch optimizations and benchmarking (by IlyaDobrynin)
torch2trt
An easy to use PyTorch to TensorRT converter (by NVIDIA-AI-IOT)
| | opti_models | torch2trt |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 55 | 4,403 |
| Growth | - | 1.2% |
| Activity | 0.0 | 7.6 |
| Latest commit | over 2 years ago | 2 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
opti_models
Posts with mentions or reviews of opti_models. We have used some of these posts to build our list of alternatives and similar projects.
- How to get from zero to production ready neural nets?
There are many ways to convert models into different formats for production: ONNX, OpenVINO, TensorRT. Here is a repo with examples: https://github.com/IlyaDobrynin/opti_models
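As a rough point of reference for the first of those export paths, here is a minimal sketch using PyTorch's built-in torch.onnx.export; the resnet18 model, input shape, and file name are illustrative assumptions, not code taken from the opti_models repo:

```python
# Minimal ONNX export sketch (placeholder model, input shape, and file name).
import torch
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input with the expected shape

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",           # output path (placeholder)
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```

The exported file can then be loaded by ONNX Runtime or converted further, e.g. to OpenVINO or TensorRT.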
torch2trt
Posts with mentions or reviews of torch2trt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-27.
- [D] How you deploy your ML model?
- PyTorch 1.10
The main thing you want for server inference is automatic batching. It's a feature that's included in ONNX Runtime, TorchServe, NVIDIA Triton Inference Server, and Ray Serve.
If you have a lot of preprocessing and postprocessing logic in your model, it can be hard to export it for ONNX Runtime or Triton, so I usually recommend starting with Ray Serve (https://docs.ray.io/en/latest/serve/index.html) and using an actor that runs inference with a quantized model or one optimized with TensorRT (https://github.com/NVIDIA-AI-IOT/torch2trt), as sketched below.
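For the TensorRT path that post mentions, the conversion follows torch2trt's documented API; a minimal sketch, assuming a placeholder resnet18 model and input shape:

```python
# Minimal torch2trt conversion sketch (placeholder model and input shape).
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()  # example input used to build the engine

# Build a TensorRT-backed module; fp16_mode trades a little precision for speed.
model_trt = torch2trt(model, [x], fp16_mode=True)

# Sanity-check that the converted module matches the original closely.
print(torch.max(torch.abs(model(x) - model_trt(x))))
```

A Ray Serve actor can then hold `model_trt` in its state and call it per request, keeping the pre- and post-processing logic in plain Python.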
- Jetson Nano: TensorFlow model. Possibly I should use PyTorch instead?
https://github.com/NVIDIA-AI-IOT/torch2trt <- pretty straightforward
https://github.com/jkjung-avt/tensorrt_demos <- this helped me a lot
- How to get TensorFlow model to run on Jetson Nano?
I generally find PyTorch easier to work with. NVIDIA has a PyTorch --> TensorRT converter (torch2trt) that yields significant speedups and has a simple Python API. Convert the PyTorch model on the Nano itself, as in the sketch below.
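Building the engine on-device matters because TensorRT engines are tuned to the GPU they were built on. Here is a minimal sketch of saving and reloading a converted model with torch2trt's documented TRTModule class; the file name is a placeholder:

```python
# Persist and reload a torch2trt-converted model (placeholder file name).
import torch
from torch2trt import TRTModule

# After converting on the Nano with torch2trt (see the earlier sketch):
# torch.save(model_trt.state_dict(), "model_trt.pth")

model_trt = TRTModule()
model_trt.load_state_dict(torch.load("model_trt.pth"))

x = torch.ones((1, 3, 224, 224)).cuda()  # placeholder input shape
y = model_trt(x)  # runs through the prebuilt TensorRT engine
```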