jetson
Self-driving AI toy car 🤖🚗. (by gsurma)
TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (by pytorch)
| | jetson | TensorRT |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 83 | 2,343 |
| Growth | - | 1.9% |
| Activity | 0.0 | 9.5 |
| Last commit | almost 3 years ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jetson
Posts with mentions or reviews of jetson. We have used some of these posts to build our list of alternatives and similar projects.
- What should I use in my self-driving car, Arduino or Raspberry Pi?

  Try Nvidia's Jetson Nano (https://github.com/gsurma/jetson).
TensorRT
Posts with mentions or reviews of TensorRT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-06.
- Learn TensorRT optimization
- I made TensorRT example. I hope this will help beginners. And I also have a question about TensorRT best practice.
- [P] [D] I made TensorRT example. I hope this will help beginners. And I also have a question about TensorRT best practice.
- [P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST

  Have you tried the new Torch-TensorRT compiler from NVIDIA?
- PyTorch 1.10

  You can also quantize your model to FP16 or Int8 using PTQ, which should give you an additional inference speed-up. Here is a tutorial[2] on leveraging TRTorch.

  [1] https://github.com/NVIDIA/TRTorch/tree/master/core
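  The PTQ idea mentioned above can be sketched in plain PyTorch without a GPU or TensorRT installed. The snippet below applies dynamic Int8 post-training quantization to a hypothetical toy model (`TinyNet` is an assumption, not from the original posts); Torch-TensorRT / TRTorch offers analogous FP16/Int8 paths that additionally compile to TensorRT engines.

  ```python
  # Minimal sketch: Int8 post-training (dynamic) quantization in plain PyTorch.
  # TinyNet is a made-up toy model for illustration only.
  import torch
  import torch.nn as nn

  class TinyNet(nn.Module):
      def __init__(self):
          super().__init__()
          self.fc1 = nn.Linear(16, 32)
          self.fc2 = nn.Linear(32, 4)

      def forward(self, x):
          return self.fc2(torch.relu(self.fc1(x)))

  model = TinyNet().eval()

  # Dynamic PTQ: Linear weights are stored as int8; activations are
  # quantized on the fly at inference time. No calibration data needed.
  qmodel = torch.ao.quantization.quantize_dynamic(
      model, {nn.Linear}, dtype=torch.qint8
  )

  x = torch.randn(1, 16)
  with torch.no_grad():
      ref = model(x)   # FP32 reference output
      out = qmodel(x)  # Int8-weight output, same shape as ref

  print(out.shape)
  ```

  The quantized model trades a small amount of accuracy for a smaller memory footprint and (on supported backends) faster inference; static PTQ with a calibration set, as used by TensorRT's Int8 mode, typically recovers more accuracy than the dynamic variant shown here.
  
  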
What are some alternatives?
When comparing jetson and TensorRT, you can also consider the following projects:
trt_pose_hand - Real-time hand pose estimation and gesture classification using TensorRT
torch2trt - An easy-to-use PyTorch-to-TensorRT converter