onnx-tensorrt vs modeld
| | onnx-tensorrt | modeld |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 2,749 | 0 |
| Growth | 2.1% | - |
| Activity | 4.1 | 0.0 |
| Last commit | 23 days ago | almost 3 years ago |
| Language | C++ | - |
| License | Apache License 2.0 | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
onnx-tensorrt mentions
Introducing Cellulose - an ONNX model visualizer with hardware runtime support annotations
[1] - We use onnx-tensorrt for these TensorRT compatibility checks.
[P] [D] How to get a TensorFlow model to run on the Jetson Nano?
Conversion was done from Keras/TensorFlow to ONNX using https://github.com/onnx/keras-onnx, followed by ONNX to TensorRT using https://github.com/onnx/onnx-tensorrt. The Python code used for inference with TensorRT can be found at https://github.com/jonnor/modeld/blob/tensorrt/tensorrtutils.py
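The two-step chain described in that post can be sketched in a few lines. This is a minimal, untested sketch, not the poster's exact code: the tiny stand-in network, the file name model.onnx, and the (1, 3, 224, 224) input shape are placeholder assumptions; the keras2onnx and onnx_tensorrt calls follow the examples in the respective project READMEs.

```python
import numpy as np
import onnx
import tensorflow as tf
import keras2onnx                       # the onnx/keras-onnx package
import onnx_tensorrt.backend as backend

# Tiny stand-in Keras model; replace with the real trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, input_shape=(3, 224, 224),
                           data_format="channels_first"),
])

# Step 1: Keras -> ONNX
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "model.onnx")

# Step 2: ONNX -> TensorRT engine via onnx-tensorrt's Python backend
engine = backend.prepare(onnx.load("model.onnx"), device="CUDA:0")

# Step 3: run inference on a random input (shape is a placeholder)
x = np.random.random((1, 3, 224, 224)).astype(np.float32)
y = engine.run(x)[0]
print(y.shape)
```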
New to this: could I use Nvidia Nano + Lobe?
Hi! You can run the models trained in Lobe on the Jetson Nano, either through TensorFlow (https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html), ONNX Runtime (https://elinux.org/Jetson_Zoo#ONNX_Runtime), or by running ONNX on TensorRT (https://github.com/onnx/onnx-tensorrt).
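For the ONNX Runtime route mentioned above, inference is only a few lines. A minimal sketch, assuming an exported model file named model.onnx and a 224x224 RGB input; both are placeholders to adjust for the actual model.

```python
import numpy as np
import onnxruntime as ort

# Load the exported ONNX model (file name is a placeholder).
session = ort.InferenceSession("model.onnx")

# Build a random image-shaped tensor; match the model's real input shape.
input_name = session.get_inputs()[0].name
x = np.random.random((1, 224, 224, 3)).astype(np.float32)

# Passing None as the output list returns all model outputs.
outputs = session.run(None, {input_name: x})
print([o.shape for o in outputs])
```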
How to install the ONNX-TensorRT Python backend on JetPack 4.5
Hello, I would like to install https://github.com/onnx/onnx-tensorrt from a package because compiling it is quite complicated. Is there any source for this package?
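One way to sidestep compiling onnx-tensorrt for plain model import: the TensorRT Python bindings that ship with JetPack already include an ONNX parser. A hedged sketch against the TensorRT 7.x API that JetPack 4.5 ships; model.onnx and the workspace size are placeholder assumptions.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# TensorRT 7 requires the explicit-batch flag for ONNX models.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:        # placeholder file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28        # 256 MiB scratch space
engine = builder.build_engine(network, config)
```

The resulting engine can be serialized with engine.serialize() and reloaded later, which avoids rebuilding it on every start.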
modeld mentions
[P] [D] How to get a TensorFlow model to run on the Jetson Nano?
Conversion was done from Keras/TensorFlow to ONNX using https://github.com/onnx/keras-onnx, followed by ONNX to TensorRT using https://github.com/onnx/onnx-tensorrt. The Python code used for inference with TensorRT can be found at https://github.com/jonnor/modeld/blob/tensorrt/tensorrtutils.py
What are some alternatives?
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
keras-onnx - Convert tf.keras/Keras models to ONNX
jetson-inference - Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
deepC - vendor-independent TinyML deep learning library, compiler, and inference framework for microcomputers and microcontrollers
gl_cadscene_rendertechniques - OpenGL sample on various rendering approaches for typical CAD scenes