Paddle2ONNX
ONNX Model Exporter for PaddlePaddle (by PaddlePaddle)
onnx-simplifier
Simplify your onnx model (by daquexian)
| | Paddle2ONNX | onnx-simplifier |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 650 | 3,564 |
| Growth | 3.5% | - |
| Activity | 8.4 | 6.5 |
| Latest commit | 9 days ago | 26 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Paddle2ONNX
Posts with mentions or reviews of Paddle2ONNX. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-12.
onnx-simplifier
Posts with mentions or reviews of onnx-simplifier. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-20.
- Show: Cross-platform Image segmentation on video using eGUI, onnxruntime and ffmpeg
onnx-simplifier can shed some of the incompatibilities you hit in widespread use, but it is itself bug-ridden and lags behind the standard. For any serious model, or when you don't get lucky simplifying the model upstream, you generally want good support for opset 11.
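For concreteness, here is a minimal sketch of checking which opset a model targets and running onnx-simplifier's Python API; the file paths are placeholders, not taken from the post:

```python
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")  # placeholder path

# Check which opset the exporter targeted; the post above recommends
# making sure opset 11 is well supported end to end.
print([(imp.domain, imp.version) for imp in model.opset_import])

# Attempt simplification; `ok` is False if the simplified graph fails
# onnx-simplifier's built-in validation against the original model.
model_simp, ok = simplify(model)
if ok:
    onnx.save(model_simp, "model-simplified.onnx")
```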
- [Technical Article] OCR Upgrade
ONNX Simplifier: https://github.com/daquexian/onnx-simplifier
- PyTorch 1.10
As far as I know, the ONNX format won't give you a performance boost on its own. However, there are ONNX optimizers for ONNX Runtime that will speed up your inference.
But if you are using Nvidia hardware, then TensorRT should give you the best performance possible, especially if you lower the precision level. Don't forget to simplify your ONNX model before converting it to TensorRT, though: https://github.com/daquexian/onnx-simplifier
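A hedged sketch of that workflow, assuming an exported model.onnx (the file names are illustrative): simplify first, then build a TensorRT engine from the result.

```python
import onnx
from onnxsim import simplify

# Simplify before conversion: TensorRT's ONNX parser is stricter than
# ONNX Runtime, and a constant-folded, cleaned-up graph tends to
# convert more reliably.
model_simp, ok = simplify(onnx.load("model.onnx"))  # placeholder path
assert ok, "onnx-simplifier could not validate the simplified graph"
onnx.save(model_simp, "model-simplified.onnx")

# The simplified file can then be built into an engine with TensorRT's
# bundled trtexec tool, e.g.:
#   trtexec --onnx=model-simplified.onnx --saveEngine=model.engine --fp16
# where --fp16 is the kind of precision change mentioned above.
```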
What are some alternatives?
When comparing Paddle2ONNX and onnx-simplifier you can also consider the following projects:
deepin-ocr
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator