onnx-tensorrt Alternatives
Similar projects and alternatives to onnx-tensorrt
- onnxruntime
  ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator.
- TensorRT
  NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
- server
  The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
- jetson-inference
  Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
- deepC
  Vendor-independent TinyML deep learning library, compiler, and inference framework for microcomputers and microcontrollers.
- gl_cadscene_rendertechniques
  OpenGL sample on various rendering approaches for typical CAD scenes.
onnx-tensorrt reviews and mentions
- [P] [D] How to get TensorFlow model to run on Jetson Nano?
  Conversion was done from Keras/TensorFlow to ONNX using https://github.com/onnx/keras-onnx, followed by ONNX to TensorRT using https://github.com/onnx/onnx-tensorrt. The Python code used for inference with TensorRT can be found at https://github.com/jonnor/modeld/blob/tensorrt/tensorrtutils.py.
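  For reference, here is a minimal sketch of that pipeline, based on the usage example in the onnx-tensorrt README; the model path, device index, and input shape are placeholders, and the Keras-to-ONNX step is only indicated in comments:

  ```python
  import numpy as np
  import onnx
  import onnx_tensorrt.backend as backend

  # Step 1 (Keras -> ONNX) is done separately with keras-onnx, e.g.:
  #   onnx_model = keras2onnx.convert_keras(keras_model, keras_model.name)
  #   onnx.save_model(onnx_model, "model.onnx")

  # Step 2: build a TensorRT engine from the ONNX graph and run inference
  # through onnx-tensorrt's Python backend.
  model = onnx.load("model.onnx")          # placeholder path
  engine = backend.prepare(model, device="CUDA:0")

  # Placeholder input: batch of one 3x224x224 image.
  input_data = np.random.random(size=(1, 3, 224, 224)).astype(np.float32)
  output = engine.run(input_data)[0]
  print(output.shape)
  ```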
Stats
onnx/onnx-tensorrt is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of onnx-tensorrt is C++.