onnx-tensorrt
deepC
| | onnx-tensorrt | deepC |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 2,749 | 505 |
| Growth | 2.1% | 0.0% |
| Activity | 4.1 | 0.0 |
| Last commit | 22 days ago | over 1 year ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
onnx-tensorrt
-
Introducing Cellulose - an ONNX model visualizer with hardware runtime support annotations
[1] - We use onnx-tensorrt for these TensorRT compatibility checks.
-
[P] [D] How to get a TensorFlow model to run on Jetson Nano?
Conversion was done from Keras/TensorFlow to ONNX using https://github.com/onnx/keras-onnx, followed by ONNX to TensorRT using https://github.com/onnx/onnx-tensorrt. The Python code used for inference with TensorRT can be found at https://github.com/jonnor/modeld/blob/tensorrt/tensorrtutils.py
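The pipeline that post describes (Keras → ONNX → TensorRT) can be sketched roughly as below. This is a hedged sketch, not the poster's exact commands: the filenames are placeholders, it assumes keras2onnx (from onnx/keras-onnx) and onnx-tensorrt's `onnx2trt` tool are installed, and the second step needs an NVIDIA GPU with TensorRT:

```shell
# 1. Keras/TensorFlow -> ONNX, using keras2onnx from
#    https://github.com/onnx/keras-onnx ("model.h5" is a placeholder).
python -c "
import onnx, keras2onnx
from tensorflow import keras
m = keras.models.load_model('model.h5')
onnx.save_model(keras2onnx.convert_keras(m, m.name), 'model.onnx')
"

# 2. ONNX -> TensorRT engine, using the onnx2trt tool built from
#    https://github.com/onnx/onnx-tensorrt (requires TensorRT + NVIDIA GPU).
onnx2trt model.onnx -o model.trt
```

The serialized `model.trt` engine can then be loaded for inference with the TensorRT Python runtime, as done in the linked tensorrtutils.py.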
-
New to this: could I use Nvidia Nano + lobe?
Hi! You can run the models trained in Lobe on the Jetson Nano, either through TensorFlow (https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html), ONNX runtime (https://elinux.org/Jetson_Zoo#ONNX_Runtime), or running ONNX on TensorRT (https://github.com/onnx/onnx-tensorrt).
-
How to install ONNX-TensorRT Python Backend on Jetpack 4.5
Hello, I would like to install https://github.com/onnx/onnx-tensorrt from a package, because compiling it is quite complicated. Is there any source for this package?
deepC
-
[D] Run Pytorch model inference on Microcontroller
DeepC, the open-source version of DeepSea. Very little activity; it looks abandoned.
-
C with Deep Learning
You could try something like deepC, but that is again C++: https://github.com/ai-techsystems/deepC
What are some alternatives?
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
stm32mp1-baremetal - Baremetal framework and example projects for the STM32MP15x Cortex-A7 based MPU
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
SI4735 - SI473X Library for Arduino
jetson-inference - Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
darknet - Convolutional Neural Networks
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
notepad2 - Notepad2-zufuliu is a light-weight Scintilla based text editor for Windows with syntax highlighting, code folding, auto-completion and API list for many programming languages and documents, bundled with file browser plugin metapath-zufuliu.
keras-onnx - Convert tf.keras/Keras models to ONNX
onnx2c - Open Neural Network Exchange to C compiler.
modeld - Self driving car lane and path detection
STM32_Base_Project - STM32 Base project with a lot of stuff